modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-07-16 06:27:54
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 522
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-07-16 06:27:41
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
farazfo/ana-text-encoder2 | farazfo | 2024-09-16T12:01:37Z | 24 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-09-16T11:49:00Z | ---
base_model: CompVis/stable-diffusion-v1-4
library_name: diffusers
license: creativeml-openrail-m
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
inference: true
instance_prompt: a photo of sks person
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - farazfo/ana-text-encoder2
This is a dreambooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on a photo of sks person using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: True.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF | mariosorri | 2024-09-16T12:01:25Z | 19 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:quantized:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-09-16T12:01:04Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
license: llama3.1
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
extra_gated_prompt: "### LLAMA 3.1 COMMUNITY LICENSE AGREEMENT\nLlama 3.1 Version\
\ Release Date: July 23, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Llama 3.1 distributed by Meta at https://llama.meta.com/doc/overview.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Llama 3.1\"\
\ means the foundational large language models and software and algorithms, including\
\ machine-learning model code, trained model weights, inference-enabling code, training-enabling\
\ code, fine-tuning enabling code and other elements of the foregoing distributed\
\ by Meta at https://llama.meta.com/llama-downloads.\n\"Llama Materials\" means,\
\ collectively, Meta’s proprietary Llama 3.1 and Documentation (and any portion\
\ thereof) made available under this Agreement.\n\"Meta\" or \"we\" means Meta Platforms\
\ Ireland Limited (if you are located in or, if you are an entity, your principal\
\ place of business is in the EEA or Switzerland) and Meta Platforms, Inc. (if you\
\ are located outside of the EEA or Switzerland).\n \n1. License Rights and Redistribution.\n\
a. Grant of Rights. You are granted a non-exclusive, worldwide, non-transferable\
\ and royalty-free limited license under Meta’s intellectual property or other rights\
\ owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy,\
\ create derivative works of, and make modifications to the Llama Materials.\nb.\
\ Redistribution and Use.\ni. If you distribute or make available the Llama Materials\
\ (or any derivative works thereof), or a product or service (including another\
\ AI model) that contains any of them, you shall (A) provide a copy of this Agreement\
\ with any such Llama Materials; and (B) prominently display “Built with Llama”\
\ on a related website, user interface, blogpost, about page, or product documentation.\
\ If you use the Llama Materials or any outputs or results of the Llama Materials\
\ to create, train, fine tune, or otherwise improve an AI model, which is distributed\
\ or made available, you shall also include “Llama” at the beginning of any such\
\ AI model name.\nii. If you receive Llama Materials, or any derivative works thereof,\
\ from a Licensee as part of an integrated end user product, then Section 2 of\
\ this Agreement will not apply to you.\niii. You must retain in all copies of the\
\ Llama Materials that you distribute the following attribution notice within a\
\ “Notice” text file distributed as a part of such copies: “Llama 3.1 is licensed\
\ under the Llama 3.1 Community License, Copyright © Meta Platforms, Inc. All Rights\
\ Reserved.”\niv. Your use of the Llama Materials must comply with applicable laws\
\ and regulations (including trade compliance laws and regulations) and adhere to\
\ the Acceptable Use Policy for the Llama Materials (available at https://llama.meta.com/llama3_1/use-policy),\
\ which is hereby incorporated by reference into this Agreement.\n2. Additional\
\ Commercial Terms. If, on the Llama 3.1 version release date, the monthly active\
\ users of the products or services made available by or for Licensee, or Licensee’s\
\ affiliates, is greater than 700 million monthly active users in the preceding\
\ calendar month, you must request a license from Meta, which Meta may grant to\
\ you in its sole discretion, and you are not authorized to exercise any of the\
\ rights under this Agreement unless or until Meta otherwise expressly grants you\
\ such rights.\n3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE\
\ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”\
\ BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY\
\ KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES\
\ OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.\
\ YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING\
\ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA\
\ MATERIALS AND ANY OUTPUT AND RESULTS.\n4. Limitation of Liability. IN NO EVENT\
\ WILL META OR ITS AFFILIATES BE LIABLE UNDER ANY THEORY OF LIABILITY, WHETHER IN\
\ CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS\
\ AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL,\
\ EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED\
\ OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n5. Intellectual Property.\na. No\
\ trademark licenses are granted under this Agreement, and in connection with the\
\ Llama Materials, neither Meta nor Licensee may use any name or mark owned by or\
\ associated with the other or any of its affiliates, except as required for reasonable\
\ and customary use in describing and redistributing the Llama Materials or as set\
\ forth in this Section 5(a). Meta hereby grants you a license to use “Llama” (the\
\ “Mark”) solely as required to comply with the last sentence of Section 1.b.i.\
\ You will comply with Meta’s brand guidelines (currently accessible at https://about.meta.com/brand/resources/meta/company-brand/\
\ ). All goodwill arising out of your use of the Mark will inure to the benefit\
\ of Meta.\nb. Subject to Meta’s ownership of Llama Materials and derivatives made\
\ by or for Meta, with respect to any derivative works and modifications of the\
\ Llama Materials that are made by you, as between you and Meta, you are and will\
\ be the owner of such derivative works and modifications.\nc. If you institute\
\ litigation or other proceedings against Meta or any entity (including a cross-claim\
\ or counterclaim in a lawsuit) alleging that the Llama Materials or Llama 3.1 outputs\
\ or results, or any portion of any of the foregoing, constitutes infringement of\
\ intellectual property or other rights owned or licensable by you, then any licenses\
\ granted to you under this Agreement shall terminate as of the date such litigation\
\ or claim is filed or instituted. You will indemnify and hold harmless Meta from\
\ and against any claim by any third party arising out of or related to your use\
\ or distribution of the Llama Materials.\n6. Term and Termination. The term of\
\ this Agreement will commence upon your acceptance of this Agreement or access\
\ to the Llama Materials and will continue in full force and effect until terminated\
\ in accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Llama 3.1 Acceptable Use Policy\n\
Meta is committed to promoting safe and fair use of its tools and features, including\
\ Llama 3.1. If you access or use Llama 3.1, you agree to this Acceptable Use Policy\
\ (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3_1/use-policy](https://llama.meta.com/llama3_1/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Llama 3.1 safely and responsibly.\
\ You agree you will not use, or allow others to use, Llama 3.1 to:\n 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 3. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 4. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 5.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 6. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 7. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 8. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Llama 3.1 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Llama 3.1 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Llama 3.1 or outputs are human-generated\n\
\ 6. Generating or facilitating false online engagement, including fake reviews\
\ and other means of fake online engagement\n4. Fail to appropriately disclose to\
\ end users any known dangers of your AI system\nPlease report any violation of\
\ this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama-models/issues](https://github.com/meta-llama/llama-models/issues)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
Job title:
type: select
options:
- Student
- Research Graduate
- AI researcher
- AI developer/engineer
- Reporter
- Other
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
---
# mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3.1-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mariosorri/Meta-Llama-3.1-8B-Instruct-Q4_0-GGUF --hf-file meta-llama-3.1-8b-instruct-q4_0.gguf -c 2048
```
|
rg1683/fine_tuned_wordpiece_test_NamedEntityRecognition_large | rg1683 | 2024-09-16T11:59:49Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| token-classification | 2024-09-14T18:37:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SzegedAI/babylm24_LSM_strict-small | SzegedAI | 2024-09-16T11:43:37Z | 2,551 | 0 | transformers | [
"transformers",
"safetensors",
"deberta",
"fill-mask",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-09-16T11:37:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Niggendar/junkjuice_silkysauce | Niggendar | 2024-09-16T11:40:12Z | 123 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-09-16T11:29:12Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
samganguly/bert-resume-classification-model | samganguly | 2024-09-16T11:39:29Z | 47 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-16T11:27:00Z | ---
library_name: transformers
tags:
- generated_from_keras_callback
model-index:
- name: bert-resume-classification-model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-resume-classification-model
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.44.2
- TensorFlow 2.15.0
- Tokenizers 0.19.1
|
DeltaIV/llama31-eros-merged_16bit | DeltaIV | 2024-09-16T11:38:18Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"base_model:finetune:grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T11:33:15Z | ---
base_model: grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** DeltaIV
- **License:** apache-2.0
- **Finetuned from model :** grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SameedHussain/gemma-2-2b-it-Flight-Multi-Turn-V3-DPO | SameedHussain | 2024-09-16T11:33:46Z | 61 | 0 | transformers | [
"transformers",
"pytorch",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T11:21:34Z | ---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- dpo
---
# Uploaded model
- **Developed by:** SameedHussain
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| Step | Training Loss | Rewards / Chosen | Rewards / Rejected | Rewards / Accuracies | Rewards / Margins | Logps / Rejected | Logps / Chosen | Logits / Rejected | Logits / Chosen |
|------|---------------|------------------|--------------------|----------------------|-------------------|------------------|----------------|-------------------|-----------------|
| 100 | 0.454700 | 6.241566 | 3.175092 | 0.750000 | 3.066474 | -102.758446 | -53.181263 | -14.580903 | -14.938275 |
| 200 | 0.264100 | 6.640531 | 2.823826 | 0.888750 | 3.816705 | -110.525520 | -50.815018 | -14.796252 | -15.198202 |
| 300 | 0.110200 | 6.310797 | 1.718347 | 0.985000 | 4.592450 | -118.720840 | -48.524315 | -15.263680 | -15.698647 |
| 400 | 0.046900 | 6.744057 | 0.677384 | 0.997500 | 6.066672 | -128.757660 | -48.107479 | -15.710546 | -16.174524 |
| 500 | 0.019700 | 6.714230 | -0.529035 | 1.000000 | 7.243264 | -143.408020 | -49.327625 | -16.120342 | -16.611662 |
| 600 | 0.013700 | 6.605389 | -1.275738 | 1.000000 | 7.881127 | -146.968491 | -48.847641 | -16.320650 | -16.836390 |
| 700 | 0.007900 | 6.333577 | -2.010140 | 1.000000 | 8.343716 | -154.255066 | -50.590134 | -16.486574 | -16.987421 |
| 800 | 0.006300 | 6.489099 | -2.076626 | 1.000000 | 8.565723 | -150.381393 | -49.992256 | -16.614525 | -17.117744 |
| 900 | 0.005100 | 6.429256 | -2.340122 | 1.000000 | 8.769380 | -160.874405 | -51.164425 | -16.687891 | -17.165791 |
| 1000 | 0.004700 | 6.494193 | -2.520164 | 1.000000 | 9.014358 | -163.852982 | -54.317467 | -16.757954 | -17.206339 |
| 1100 | 0.005900 | 6.287598 | -2.524287 | 1.000000 | 8.811884 | -161.473770 | -52.012741 | -16.825716 | -17.266563 |
| 1200 | 0.005200 | 6.246828 | -3.126722 | 0.998750 | 9.373549 | -167.766861 | -52.052780 | -16.795412 | -17.277397 |
| 1300 | 0.004300 | 6.347938 | -2.930621 | 1.000000 | 9.278559 | -165.971939 | -50.738480 | -16.836918 | -17.304783 |
| 1400 | 0.003900 | 6.232501 | -3.073614 | 1.000000 | 9.306114 | -165.787643 | -50.953049 | -16.813383 | -17.290031 |
|
valstu/Ahma-3B-Instruct-GGUF | valstu | 2024-09-16T11:32:16Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:Finnish-NLP/Ahma-3B-Instruct",
"base_model:quantized:Finnish-NLP/Ahma-3B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T11:30:11Z | ---
base_model: Finnish-NLP/Ahma-3B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** valstu
- **License:** apache-2.0
- **Finetuned from model :** Finnish-NLP/Ahma-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
leukas/DeBaby-halfcontr | leukas | 2024-09-16T11:23:36Z | 2,745 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-09-11T15:40:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/arco-75k-GGUF | mradermacher | 2024-09-16T11:22:35Z | 23 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"sft",
"en",
"base_model:appvoid/arco-75k",
"base_model:quantized:appvoid/arco-75k",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-09-16T11:18:32Z | ---
base_model: appvoid/arco-75k
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/appvoid/arco-75k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_XS.gguf) | IQ3_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_S.gguf) | IQ3_S | 0.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ3_M.gguf) | IQ3_M | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/arco-75k-GGUF/resolve/main/arco-75k.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/TinyLlama-Psychotherapist-GGUF | mradermacher | 2024-09-16T11:18:37Z | 13 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:kebbbnnn/TinyLlama-Psychotherapist",
"base_model:quantized:kebbbnnn/TinyLlama-Psychotherapist",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T11:08:53Z | ---
base_model: kebbbnnn/TinyLlama-Psychotherapist
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kebbbnnn/TinyLlama-Psychotherapist
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Psychotherapist-GGUF/resolve/main/TinyLlama-Psychotherapist.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
swp-berlin/deberta-base-news-topics-kenia-china | swp-berlin | 2024-09-16T11:11:40Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"doi:10.57967/hf/3077",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-12T16:14:42Z | ---
library_name: transformers
pipeline_tag: text-classification
---
# Model Card for Model ID
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** Paul Bochtler
- **Finetuned from model [optional]:** microsoft/deberta-v3-base
## Uses
The model `swp-berlin/deberta-base-news-topics-kenia-china` has been trained on about 700 articles from Kenyan newspapers to detect the presence of the following topics:
- **Chinese Domestic and Regional Politics**: Covers topics related to the CCP, Taiwan, and Xinjiang.
- **Coronavirus**: Includes topics related to the outbreak and vaccines.
- **Cultural Cooperation**: Topics covering cultural exchanges and partnerships.
- **Financing/Loans/Debt**: Covers financial aspects including loans and debt management.
- **Global Affairs/International (Geo)politics**: Topics related to international relations and geopolitical dynamics.
- **ICT**: Focuses on topics related to information and communication technologies.
- **Kenyan Foreign Policy/Diplomacy**: Focus on Kenya's foreign relations and diplomatic efforts.
- **Regional Affairs/African Politics**: Topics on regional dynamics and African political issues.
- **SGR Railway**: Specific topics related to the Standard Gauge Railway project.
- **Social Controversies**: Includes discussions on lawsuits, trade/business practices, and poaching.
- **Tourism**: Covers aspects related to the tourism industry.
- **Trade/Investment**: Includes import/export, tenders, and investment projects.
### Direct Use
This model can be directly applied to classify articles based on the above topics, making it suitable for use in media analysis, content categorization, and research on public discourse in Kenyan media.
## Bias, Risks, and Limitations
The model `swp-berlin/deberta-base-news-topics-kenia-china` was trained on approximately 700 articles from Kenyan newspapers, which may introduce certain biases and limitations:
- **Data Bias**: The model's predictions are influenced by the specific articles and sources used during training, which may reflect the perspectives, biases, and linguistic styles of those publications. This can result in an overrepresentation of certain viewpoints or underrepresentation of others, especially those outside the mainstream media.
- **Cultural and Regional Bias**: Since the training data is centered around Kenyan newspapers, the model may perform better on content related to East African contexts and may not generalize well to other regions or cultural settings.
- **Topic Limitations**: The model is designed to detect specific topics such as global affairs, development cooperation, and social controversies. It may not perform well on texts that fall outside these predefined categories.
- **Risks of Misclassification**: As with any classification model, there is a risk of misclassification, especially with nuanced or ambiguous content. Users should verify model outputs where high accuracy is critical.
- **Ethical Considerations**: Users should be mindful of how the model’s outputs are used, particularly in sensitive contexts such as media analysis or public discourse monitoring, to avoid reinforcing biases or misinformation.
## How to Get Started with the Model
To get started with the model, use the following code snippet:
```python
from transformers import pipeline, AutoModelForSequenceClassification, AutoTokenizer
import torch
# Define model path and device
model_name = "swp-berlin/deberta-base-news-topics-kenia-china"
device = "cuda:0" if torch.cuda.is_available() else "cpu" # Use GPU if available, otherwise CPU
# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, model_max_length=512)
# Initialize the pipeline for text classification
pipe_classifier = pipeline(
"text-classification",
model=model,
tokenizer=tokenizer,
framework="pt",
device=device,
batch_size=2
)
# Example usage
result = pipe_classifier("Example text to classify")
print(result)
```
### Training Data
The model was trained on a custom dataset comprising approximately 700 articles from Kenyan newspapers. The dataset includes a variety of topics relevant to Kenyan and international contexts, including health, politics, development, and cultural affairs. Preprocessing involved filtering irrelevant articles and balancing the dataset across the target topics.
### Training Procedure
The model was fine-tuned on a pre-trained `DeBERTa-base` model using the following training configuration:
#### Preprocessing
- Texts were tokenized using the `DeBERTa` tokenizer, with special attention given to splitting sentences and removing noise such as URLs and non-text elements.
#### Training Hyperparameters
- **Hyperparameters**:
- **Learning Rate**: `6e-5`
- **Batch Size**: `8`
- **Epochs**: `20`
- **Gradient Accumulation Steps**: `4`
- **Warm-up Ratio**: `0.06` to gradually ramp up the learning rate at the start of training
- **Weight Decay**: `0.01` to regularize the model and prevent overfitting
- **Evaluation Strategy**: Evaluation was performed at the end of each epoch, with the best model based on `f1_macro` score retained.
Training was conducted on a GPU environment to optimize performance and speed. The training script utilized Hugging Face's `Trainer` class for efficient model management and evaluation.
|
mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF | mradermacher | 2024-09-16T11:02:06Z | 108 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:Amod/mental_health_counseling_conversations",
"dataset:mpingale/mental-health-chat-dataset",
"dataset:heliosbrahma/mental_health_chatbot_dataset",
"dataset:SAGI-1/Greetings_DPO_dataset_V1",
"base_model:kebbbnnn/TinyLlama-Friendly-Psychotherapist",
"base_model:quantized:kebbbnnn/TinyLlama-Friendly-Psychotherapist",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T10:52:51Z | ---
base_model: kebbbnnn/TinyLlama-Friendly-Psychotherapist
datasets:
- Amod/mental_health_counseling_conversations
- mpingale/mental-health-chat-dataset
- heliosbrahma/mental_health_chatbot_dataset
- SAGI-1/Greetings_DPO_dataset_V1
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kebbbnnn/TinyLlama-Friendly-Psychotherapist
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q2_K.gguf) | Q2_K | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.IQ3_XS.gguf) | IQ3_XS | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q3_K_S.gguf) | Q3_K_S | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.IQ3_S.gguf) | IQ3_S | 0.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.IQ3_M.gguf) | IQ3_M | 0.6 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q3_K_M.gguf) | Q3_K_M | 0.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q3_K_L.gguf) | Q3_K_L | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.IQ4_XS.gguf) | IQ4_XS | 0.7 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q4_K_S.gguf) | Q4_K_S | 0.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q4_K_M.gguf) | Q4_K_M | 0.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q5_K_S.gguf) | Q5_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q5_K_M.gguf) | Q5_K_M | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q6_K.gguf) | Q6_K | 1.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.Q8_0.gguf) | Q8_0 | 1.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/TinyLlama-Friendly-Psychotherapist-GGUF/resolve/main/TinyLlama-Friendly-Psychotherapist.f16.gguf) | f16 | 2.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF | mradermacher | 2024-09-16T10:51:31Z | 24 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:NobleMathews/SmolLM-135M-GrammarSFT-DPO",
"base_model:quantized:NobleMathews/SmolLM-135M-GrammarSFT-DPO",
"endpoints_compatible",
"region:us"
]
| null | 2024-09-16T10:49:46Z | ---
base_model: NobleMathews/SmolLM-135M-GrammarSFT-DPO
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/NobleMathews/SmolLM-135M-GrammarSFT-DPO
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_S.gguf) | IQ3_S | 0.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_XS.gguf) | IQ3_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ3_M.gguf) | IQ3_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SmolLM-135M-GrammarSFT-DPO-GGUF/resolve/main/SmolLM-135M-GrammarSFT-DPO.f16.gguf) | f16 | 0.4 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
TopperThijs/merge_test | TopperThijs | 2024-09-16T10:44:09Z | 35 | 0 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-09-04T08:54:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NAITIVEAIAGENCY/NAITIVRLORA | NAITIVEAIAGENCY | 2024-09-16T10:41:02Z | 5 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2024-09-15T22:10:48Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: NAITIVECHRIS
---
# Naitivrlora
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `NAITIVECHRIS` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Skalinator/NAITIVRLORA', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Hajarhammadi/gpt2_finetuned | Hajarhammadi | 2024-09-16T10:40:23Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-08-20T13:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf | RichardErkhov | 2024-09-16T10:35:48Z | 28 | 0 | null | [
"gguf",
"arxiv:2309.16609",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T02:50:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1.5-7B-Dutch-Chat - GGUF
- Model creator: https://huggingface.co/robinsmits/
- Original model: https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen1.5-7B-Dutch-Chat.Q2_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q2_K.gguf) | Q2_K | 2.89GB |
| [Qwen1.5-7B-Dutch-Chat.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.IQ3_XS.gguf) | IQ3_XS | 3.18GB |
| [Qwen1.5-7B-Dutch-Chat.IQ3_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.IQ3_S.gguf) | IQ3_S | 3.32GB |
| [Qwen1.5-7B-Dutch-Chat.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q3_K_S.gguf) | Q3_K_S | 3.32GB |
| [Qwen1.5-7B-Dutch-Chat.IQ3_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.IQ3_M.gguf) | IQ3_M | 3.48GB |
| [Qwen1.5-7B-Dutch-Chat.Q3_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q3_K.gguf) | Q3_K | 3.65GB |
| [Qwen1.5-7B-Dutch-Chat.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q3_K_M.gguf) | Q3_K_M | 3.65GB |
| [Qwen1.5-7B-Dutch-Chat.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q3_K_L.gguf) | Q3_K_L | 3.93GB |
| [Qwen1.5-7B-Dutch-Chat.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.IQ4_XS.gguf) | IQ4_XS | 4.02GB |
| [Qwen1.5-7B-Dutch-Chat.Q4_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q4_0.gguf) | Q4_0 | 4.2GB |
| [Qwen1.5-7B-Dutch-Chat.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.IQ4_NL.gguf) | IQ4_NL | 4.22GB |
| [Qwen1.5-7B-Dutch-Chat.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q4_K_S.gguf) | Q4_K_S | 4.23GB |
| [Qwen1.5-7B-Dutch-Chat.Q4_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q4_K.gguf) | Q4_K | 4.44GB |
| [Qwen1.5-7B-Dutch-Chat.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q4_K_M.gguf) | Q4_K_M | 4.44GB |
| [Qwen1.5-7B-Dutch-Chat.Q4_1.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q4_1.gguf) | Q4_1 | 4.61GB |
| [Qwen1.5-7B-Dutch-Chat.Q5_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q5_0.gguf) | Q5_0 | 5.03GB |
| [Qwen1.5-7B-Dutch-Chat.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q5_K_S.gguf) | Q5_K_S | 5.03GB |
| [Qwen1.5-7B-Dutch-Chat.Q5_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q5_K.gguf) | Q5_K | 5.15GB |
| [Qwen1.5-7B-Dutch-Chat.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q5_K_M.gguf) | Q5_K_M | 5.15GB |
| [Qwen1.5-7B-Dutch-Chat.Q5_1.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q5_1.gguf) | Q5_1 | 5.44GB |
| [Qwen1.5-7B-Dutch-Chat.Q6_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q6_K.gguf) | Q6_K | 5.9GB |
| [Qwen1.5-7B-Dutch-Chat.Q8_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-gguf/blob/main/Qwen1.5-7B-Dutch-Chat.Q8_0.gguf) | Q8_0 | 7.65GB |
Original model description:
---
language:
- nl
license: cc-by-nc-4.0
library_name: transformers
tags:
- trl
- dpo
- conversational
datasets:
- BramVanroy/ultra_feedback_dutch_cleaned
pipeline_tag: text-generation
inference: false
model-index:
- name: Qwen1.5-7B-Dutch-Chat
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 53.92
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 76.03
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.38
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 45.34
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.82
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 15.47
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=robinsmits/Qwen1.5-7B-Dutch-Chat
name: Open LLM Leaderboard
---
# Qwen1.5-7B-Dutch-Chat
## Model description
This DPO aligned model is the merged version of the adapter model [robinsmits/Qwen1.5-7B-Dutch-Chat-Dpo](https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Dpo).
DPO Finetuning was performed on the Dutch [BramVanroy/ultra_feedback_dutch_cleaned](https://huggingface.co/datasets/BramVanroy/ultra_feedback_dutch_cleaned) dataset.
See [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) for all information about the base model.
## ScandEval Dutch Leaderboard Evaluation Results
For evaluation results based on the Dutch language you can take a look at the site of ScandEval.
This model achieves a score which is very close to the performance of GPT-3.5.
[Dutch Natural Language Understanding](https://scandeval.com/dutch-nlu/)
[Dutch Natural Language Generation](https://scandeval.com/dutch-nlg/)
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_robinsmits__Qwen1.5-7B-Dutch-Chat)
Note that these Evaluation Results are for the English language.
| Metric |Value|
|---------------------------------|----:|
|Avg. |53.66|
|AI2 Reasoning Challenge (25-Shot)|53.92|
|HellaSwag (10-Shot) |76.03|
|MMLU (5-Shot) |62.38|
|TruthfulQA (0-shot) |45.34|
|Winogrande (5-shot) |68.82|
|GSM8k (5-shot) |15.47|
## Model usage
A basic example of how to use the finetuned model.
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = 'cuda'
model_name = 'robinsmits/Qwen1.5-7B-Dutch-Chat'
model = AutoModelForCausalLM.from_pretrained(model_name,
device_map = "auto",
torch_dtype = torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [{"role": "user", "content": "Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?"}]
encoded_ids = tokenizer.apply_chat_template(messages,
add_generation_prompt = True,
return_tensors = "pt")
generated_ids = model.generate(input_ids = encoded_ids.to(device),
max_new_tokens = 256,
do_sample = True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
Below the chat template with the generated output.
```
<|im_start|>system
Je bent een behulpzame AI assistent<|im_end|>
<|im_start|>user
Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?<|im_end|>
<|im_start|>assistant
Hallo! Appels zijn zo'n lekkere fruitsoort. Ze zijn zoet en knapperig, en je kunt ze koken, roosteren of zelfs in smoothies doen. Er zijn heel veel verschillende soorten appels, zoals de Fuji, Granny Smith en Gala. De appels die je meestal in de winkel koopt, komen van bomen die in het oosten van Noord-Amerika groeien.<|im_end|>
```
## Intended uses & limitations
As with all LLM's this model can also experience bias and hallucinations. Regardless of how you use this model always perform the necessary testing and validation.
The used dataset does not allow commercial usage.
## Training and evaluation data
The training notebook is available at the following link: [Qwen1_5_7B_Dutch_Chat_DPO](https://github.com/RobinSmits/Dutch-LLMs/blob/main/Qwen1_5_7B_Dutch_Chat_DPO.ipynb)
Training was performed with Google Colab PRO on a A100 - 40GB and lasted around 4 hours.
It achieves the following results on the evaluation set:
- Loss: 0.2610
- Rewards/chosen: -0.7248
- Rewards/rejected: -2.6224
- Rewards/accuracies: 0.9170
- Rewards/margins: 1.8976
- Logps/rejected: -877.8102
- Logps/chosen: -783.4282
- Logits/rejected: -0.8110
- Logits/chosen: -0.7528
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5503 | 0.1 | 30 | 0.4684 | -0.0439 | -0.6295 | 0.8919 | 0.5856 | -837.9513 | -769.8103 | -0.9335 | -0.8894 |
| 0.4178 | 0.2 | 60 | 0.3568 | -0.3713 | -1.4769 | 0.9015 | 1.1056 | -854.9000 | -776.3594 | -0.8768 | -0.8276 |
| 0.3264 | 0.29 | 90 | 0.3143 | -0.4893 | -1.8730 | 0.9151 | 1.3837 | -862.8228 | -778.7191 | -0.8428 | -0.7929 |
| 0.2999 | 0.39 | 120 | 0.2885 | -0.6832 | -2.3118 | 0.9151 | 1.6286 | -871.5981 | -782.5971 | -0.8260 | -0.7730 |
| 0.3454 | 0.49 | 150 | 0.2749 | -0.7239 | -2.4904 | 0.9189 | 1.7664 | -875.1693 | -783.4113 | -0.8235 | -0.7678 |
| 0.3354 | 0.59 | 180 | 0.2685 | -0.6775 | -2.4859 | 0.9170 | 1.8084 | -875.0795 | -782.4824 | -0.8130 | -0.7574 |
| 0.2848 | 0.68 | 210 | 0.2652 | -0.7157 | -2.5692 | 0.9131 | 1.8535 | -876.7465 | -783.2466 | -0.8157 | -0.7586 |
| 0.3437 | 0.78 | 240 | 0.2621 | -0.7233 | -2.6091 | 0.9151 | 1.8857 | -877.5430 | -783.3994 | -0.8138 | -0.7561 |
| 0.2655 | 0.88 | 270 | 0.2611 | -0.7183 | -2.6154 | 0.9151 | 1.8971 | -877.6708 | -783.2995 | -0.8106 | -0.7524 |
| 0.3442 | 0.98 | 300 | 0.2610 | -0.7248 | -2.6224 | 0.9170 | 1.8976 | -877.8102 | -783.4282 | -0.8110 | -0.7528 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
## Citation
Thanks to the creators of Qwen1.5 for their great work!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step100000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T10:31:43Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T10:29:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jfunk14/model4 | jfunk14 | 2024-09-16T10:23:39Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-13T15:15:00Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: model4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model4
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6631
- Accuracy: 0.5729
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Ba2han/Pretraining_Turkish-test-5500s | Ba2han | 2024-09-16T10:23:11Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"tr",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T08:39:23Z | ---
license: mit
language:
- tr
library_name: transformers
---
Pretrained on 1B (mostly Turkish) tokens from HF and "high quality" scraped data using 1 RTX 3090. The training will continue. The model already can produce sensible sentences in Turkish.
___________________________
HF kaynaklı ve scrape edilen 1 Milyar (çoğunlukla Türkçe) token ile 1 RTX 3090 kullanılarak eğitilmiştir. Eğitim devam edecek. Model şimdiden düzgün Türkçe cümleler kurabiliyor:

(top_k=24, repetition_penalty=1.1, temperature=0.12, seed=1022) |
xinsir/controlnet-tile-sdxl-1.0 | xinsir | 2024-09-16T10:14:19Z | 25,250 | 213 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"region:us"
]
| text-to-image | 2024-06-26T15:27:48Z | ---
license: apache-2.0
pipeline_tag: text-to-image
---
# ***ControlNet Tile SDXL***

# Image Deblur Example(Repaint Detail)





# Image Variation Example(like midjourney)





# Image Super-resolution(like realESRGAN)
support any aspect ratio and any times upscale, followings are 3 * 3 times




# Code to Use Tile blur
code reference: https://huggingface.co/TTPlanet/TTPLanet_SDXL_Controlnet_Tile_Realistic/blob/main/TTP_tile_preprocessor_v5.py
https://github.com/lllyasviel/ControlNet-v1-1-nightly/blob/main/gradio_tile.py
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
from guided_filter import FastGuidedFilter # I have upload this file in this repo
import torch
import numpy as np
import cv2
def resize_image_control(control_image, resolution):
HH, WW, _ = control_image.shape
crop_h = random.randint(0, HH - resolution[1])
crop_w = random.randint(0, WW - resolution[0])
crop_image = control_image[crop_h:crop_h+resolution[1], crop_w:crop_w+resolution[0], :]
return crop_image, crop_w, crop_h
def apply_gaussian_blur(image_np, ksize=5, sigmaX=1.0):
if ksize % 2 == 0:
ksize += 1 # ksize must be odd
blurred_image = cv2.GaussianBlur(image_np, (ksize, ksize), sigmaX=sigmaX)
return blurred_image
def apply_guided_filter(image_np, radius, eps, scale):
filter = FastGuidedFilter(image_np, radius, eps, scale)
return filter.filter(image_np)
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio), int(height * ratio)
crop_w, crop_h = 0, 0
controlnet_img = cv2.resize(controlnet_img, (W, H))
blur_strength = random.sample([i / 10. for i in range(10, 201, 2)], k=1)[0]
radius = random.sample([i for i in range(1, 40, 2)], k=1)[0]
eps = random.sample([i / 1000. for i in range(1, 101, 2)], k=1)[0]
scale_factor = random.sample([i / 10. for i in range(10, 181, 5)], k=1)[0]
if random.random() > 0.5:
controlnet_img = apply_gaussian_blur(controlnet_img, ksize=int(blur_strength), sigmaX=blur_strength / 2)
if random.random() > 0.5:
# Apply Guided Filter
controlnet_img = apply_guided_filter(controlnet_img, radius, eps, scale_factor)
# Resize image
controlnet_img = cv2.resize(controlnet_img, (int(W / scale_factor), int(H / scale_factor)), interpolation=cv2.INTER_AREA)
controlnet_img = cv2.resize(controlnet_img, (W, H), interpolation=cv2.INTER_CUBIC)
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or same bucket resolution to get the best performance
images = pipe(
prompt,
negative_prompt=negative_prompt,
image=controlnet_img,
controlnet_conditioning_scale=controlnet_conditioning_scale,
width=new_width,
height=new_height,
num_inference_steps=30,
).images
images[0].save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
```
# Code to Use Tile var
Use more detail prompt to regerate can help!
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
import torch
import numpy as np
import cv2
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio), int(height * ratio)
crop_w, crop_h = 0, 0
controlnet_img = cv2.resize(controlnet_img, (W, H))
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or same bucket resolution to get the best performance
images = pipe(
prompt,
negative_prompt=negative_prompt,
image=controlnet_img,
controlnet_conditioning_scale=controlnet_conditioning_scale,
width=new_width,
height=new_height,
num_inference_steps=30,
).images
images[0].save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
```
# Code to Use Tile super
performance may unstable and next version is optimizing!
```python
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline, AutoencoderKL
from diffusers import DDIMScheduler, EulerAncestralDiscreteScheduler
from PIL import Image
import torch
import numpy as np
import cv2
controlnet_conditioning_scale = 1.0
prompt = "your prompt, the longer the better, you can describe it as detail as possible"
negative_prompt = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality'
eulera_scheduler = EulerAncestralDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
controlnet = ControlNetModel.from_pretrained(
"xinsir/controlnet-tile-sdxl-1.0",
torch_dtype=torch.float16
)
# when test with other base model, you need to change the vae also.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
"stabilityai/stable-diffusion-xl-base-1.0",
controlnet=controlnet,
vae=vae,
safety_checker=None,
torch_dtype=torch.float16,
scheduler=eulera_scheduler,
)
controlnet_img = cv2.imread("your original image path")
height, width, _ = controlnet_img.shape
ratio = np.sqrt(1024. * 1024. / (width * height))
W, H = int(width * ratio) // 48 * 48, int(height * ratio) // 48 * 48
controlnet_img = cv2.resize(controlnet_img, (W, H))
controlnet_img = cv2.cvtColor(controlnet_img, cv2.COLOR_BGR2RGB)
controlnet_img = Image.fromarray(controlnet_img)
# need to resize the image resolution to 1024 * 1024 or same bucket resolution to get the best performance
target_width = W // 3
target_height = H // 3
for i in range(3): # 两行
for j in range(3): # 两列
left = j * target_width
top = i * target_height
right = left + target_width
bottom = top + target_height
# 根据计算的边界裁剪图像
cropped_image = controlnet_img.crop((left, top, right, bottom))
cropped_image = cropped_image.resize((W, H))
images.append(cropped_image)
seed = random.randint(0, 2147483647)
generator = torch.Generator('cuda').manual_seed(seed)
result_images = []
for sub_img in images:
new_width, new_height = W, H
out = pipe(prompt=[prompt]*1,
image=sub_img,
control_image=sub_img,
negative_prompt=[negative_prompt]*1,
generator=generator,
width=new_width,
height=new_height,
num_inference_steps=30,
crops_coords_top_left=(W, H),
target_size=(W, H),
original_size=(W * 2, H * 2),
)
result_images.append(out.images[0])
new_im = Image.new('RGB', (new_width*3, new_height*3))
# 拼接图片到新的图像上
new_im.paste(result_images[0], (0, 0))
new_im.paste(result_images[1], (new_width, 0))
new_im.paste(result_images[2], (new_width * 2, 0))
new_im.paste(result_images[3], (0, new_height))
new_im.paste(result_images[4], (new_width, new_height))
new_im.paste(result_images[5], (new_width * 2, new_height))
new_im.paste(result_images[6], (0, new_height * 2))
new_im.paste(result_images[7], (new_width, new_height * 2))
new_im.paste(result_images[8], (new_width * 2, new_height * 2))
new_im.save(f"your image save path, png format is usually better than jpg or webp in terms of image quality but got much bigger")
``` |
MICS-Lab/novae-brain-0 | MICS-Lab | 2024-09-16T10:13:48Z | 23 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
]
| null | 2024-09-10T11:57:18Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
latent-space-dreams/LS_Vividus_XL | latent-space-dreams | 2024-09-16T10:13:05Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2023-12-08T10:27:42Z | ---
{license: other,
license_name: faipl-1.0-sd,
license_link: https://freedevproject.org/faipl-1.0-sd/}
---
An anime style general purpose merge.
License: [Fair AI Public License 1.0-SD](https://freedevproject.org/faipl-1.0-sd/) with the following addition: No commercial use
Note: The CosXL version has the Stability AI Non-Commercial Research Community License |
MICS-Lab/novae-human-0 | MICS-Lab | 2024-09-16T10:05:02Z | 148 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"dataset:MICS-Lab/novae",
"region:us"
]
| null | 2024-09-10T11:45:16Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
datasets:
- MICS-Lab/novae
---
Default [Novae](https://github.com/MICS-Lab/novae) model for human tissues. |
mihofer/poca-SoccerTwos | mihofer | 2024-09-16T10:00:58Z | 34 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
]
| reinforcement-learning | 2024-09-16T09:57:38Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: mihofer/poca-SoccerTwos
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Aratako/calm3-22b-RP-v2-GGUF | Aratako | 2024-09-16T09:55:09Z | 987 | 4 | null | [
"gguf",
"roleplay",
"ja",
"base_model:Aratako/calm3-22b-RP-v2",
"base_model:quantized:Aratako/calm3-22b-RP-v2",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T04:30:57Z | ---
license: cc-by-nc-sa-4.0
language:
- ja
tags:
- roleplay
base_model:
- Aratako/calm3-22b-RP-v2
---
# calm3-22b-RP-v2-GGUF
## 概要
[Aratako/calm3-22b-RP-v2](https://huggingface.co/Aratako/calm3-22b-RP-v2)の量子化済みGGUF版です。詳細は元モデルをご確認ください。
## ライセンス
本モデルの学習データにはOpenAI社のGPT-4o-miniの出力やAnthropic社のClaude 3.5 Sonnetの出力が含まれるため、これらの競合となるサービスには本モデルは利用できません。
そのため、本モデルは[CC-BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.ja)の元配布します。 |
kkasiviswanath/bart_large_qna_dataset_2_v1 | kkasiviswanath | 2024-09-16T09:52:20Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-09-16T09:49:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-2.8b-deduped-int2-step95000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T09:43:55Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T09:41:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ahmednoor00/distilbert-stock-tweet-sentiment-analysis | ahmednoor00 | 2024-09-16T09:43:22Z | 174 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-16T09:39:22Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-stock-tweet-sentiment-analysis
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-stock-tweet-sentiment-analysis
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6088
- Accuracy: 0.7758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6873 | 1.0 | 1000 | 0.5699 | 0.7678 |
| 0.4813 | 2.0 | 2000 | 0.5673 | 0.7732 |
| 0.3664 | 3.0 | 3000 | 0.6088 | 0.7758 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Akshath123/DamageCarModel | Akshath123 | 2024-09-16T09:41:28Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2024-09-16T09:22:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AmeerH/FPT_774M_FineTune_Flights_v3 | AmeerH | 2024-09-16T09:36:10Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-15T19:38:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlCyede/berry-and-berrylike-fruit-classification | AlCyede | 2024-09-16T09:36:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-09-16T09:35:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Moriacrafter/Qwen1.5-4B-4bit_DepressionDetection_Reddit | Moriacrafter | 2024-09-16T09:33:28Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T09:29:21Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
fuscusx/flux_bruno | fuscusx | 2024-09-16T09:18:47Z | 11 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2024-09-16T08:39:11Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: brunolinkedin
---
# Flux_Bruno
<!-- <Gallery /> -->
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `brunolinkedin` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('fuscusx/flux_bruno', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
DeltaChocolate/MP_TAPEX_finetuned_old | DeltaChocolate | 2024-09-16T09:17:10Z | 32 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"table-question-answering",
"base_model:microsoft/tapex-base",
"base_model:finetune:microsoft/tapex-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| table-question-answering | 2024-09-15T01:35:03Z | ---
library_name: transformers
license: mit
base_model: microsoft/tapex-base
tags:
- generated_from_trainer
- table-question-answering
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/tapex-base](https://huggingface.co/microsoft/tapex-base) on a private dataset.
It achieves the following results on the evaluation set:
- Loss: 4.2378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.031524797816536e-05
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 443
- num_epochs: 8
- mixed_precision_training: Native AMP
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1 |
LlamaFinetuneBase/Gemma-2-2B | LlamaFinetuneBase | 2024-09-16T08:59:59Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:2110.08193",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:1804.06876",
"arxiv:2103.03874",
"arxiv:2304.06364",
"arxiv:1903.00161",
"arxiv:2206.04615",
"arxiv:2203.09509",
"arxiv:2403.13793",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T08:00:24Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/base)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma2]
**Terms of Use**: [Terms][terms]
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights for both pre-trained variants and instruction-tuned variants.
Gemma models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to get quickly started with running the model. First, install the Transformers library with:
```sh
pip install -U transformers
```
Then, copy the snippet from the section that is relevant for your usecase.
#### Running with the `pipeline` API
```python
import torch
from transformers import pipeline
pipe = pipeline(
"text-generation",
model="google/gemma-2-2b",
device="cuda", # replace with "mps" to run on a Mac device
)
text = "Once upon a time,"
outputs = pipe(text, max_new_tokens=256)
response = outputs[0]["generated_text"]
print(response)
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
device_map="auto",
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
#### Running the model through a CLI
The [local-gemma](https://github.com/huggingface/local-gemma) repository contains a lightweight wrapper around Transformers
for running Gemma 2 through a command line interface, or CLI. Follow the [installation instructions](https://github.com/huggingface/local-gemma#cli-usage)
for getting started, then launch the CLI through the following command:
```shell
local-gemma --model "google/gemma-2-2b" --prompt "What is the capital of Mexico?"
```
#### Quantized Versions through `bitsandbytes`
<details>
<summary>
Using 8-bit precision (int8)
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
<details>
<summary>
Using 4-bit precision
</summary>
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = AutoModelForCausalLM.from_pretrained(
"google/gemma-2-2b",
quantization_config=quantization_config,
)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids, max_new_tokens=32)
print(tokenizer.decode(outputs[0]))
```
</details>
#### Advanced Usage
<details>
<summary>
Torch compile
</summary>
[Torch compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) is a method for speeding-up the
inference of PyTorch modules. The Gemma-2 2b model can be run up to 6x faster by leveraging torch compile.
Note that two warm-up steps are required before the full inference speed is realised:
```python
import os
os.environ["TOKENIZERS_PARALLELISM"] = "false"
from transformers import AutoTokenizer, Gemma2ForCausalLM
from transformers.cache_utils import HybridCache
import torch
torch.set_float32_matmul_precision("high")
# load the model + tokenizer
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-2b")
model = Gemma2ForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.bfloat16)
model.to("cuda")
# apply the torch compile transformation
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
# pre-process inputs
input_text = "The theory of special relativity states "
model_inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
prompt_length = model_inputs.input_ids.shape[1]
# set-up k/v cache
past_key_values = HybridCache(
config=model.config,
max_batch_size=1,
max_cache_len=model.config.max_position_embeddings,
device=model.device,
dtype=model.dtype
)
# enable passing kv cache to generate
model._supports_cache_class = True
model.generation_config.cache_implementation = None
# two warm-up steps
for idx in range(2):
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
past_key_values.reset()
# fast run
outputs = model.generate(**model_inputs, past_key_values=past_key_values, do_sample=True, temperature=1.0, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
For more details, refer to the [Transformers documentation](https://huggingface.co/docs/transformers/main/en/llm_optims?static-kv=basic+usage%3A+generation_config).
</details>
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
### Citation
```none
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 13 trillion tokens, the 9B model was
trained with 8 trillion tokens, and 2B model was trained with 2 trillion tokens.
Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content.
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)][tpu] hardware (TPUv5p).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models][foundation-models], including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | Gemma 2 PT 2B | Gemma 2 PT 9B | Gemma 2 PT 27B |
| ------------------------------ | ------------- | ------------- | ------------- | -------------- |
| [MMLU][mmlu] | 5-shot, top-1 | 51.3 | 71.3 | 75.2 |
| [HellaSwag][hellaswag] | 10-shot | 73.0 | 81.9 | 86.4 |
| [PIQA][piqa] | 0-shot | 77.8 | 81.7 | 83.2 |
| [SocialIQA][socialiqa] | 0-shot | 51.9 | 53.4 | 53.7 |
| [BoolQ][boolq] | 0-shot | 72.5 | 84.2 | 84.8 |
| [WinoGrande][winogrande] | partial score | 70.9 | 80.6 | 83.7 |
| [ARC-e][arc] | 0-shot | 80.1 | 88.0 | 88.6 |
| [ARC-c][arc] | 25-shot | 55.4 | 68.4 | 71.4 |
| [TriviaQA][triviaqa] | 5-shot | 59.4 | 76.6 | 83.7 |
| [Natural Questions][naturalq] | 5-shot | 16.7 | 29.2 | 34.5 |
| [HumanEval][humaneval] | pass@1 | 17.7 | 40.2 | 51.8 |
| [MBPP][mbpp] | 3-shot | 29.6 | 52.4 | 62.6 |
| [GSM8K][gsm8k] | 5-shot, maj@1 | 23.9 | 68.6 | 74.0 |
| [MATH][math] | 4-shot | 15.0 | 36.6 | 42.3 |
| [AGIEval][agieval] | 3-5-shot | 30.6 | 52.8 | 55.1 |
| [DROP][drop] | 3-shot, F1 | 52.0 | 69.4 | 72.2 |
| [BIG-Bench][big-bench] | 3-shot, CoT | 41.9 | 68.2 | 74.9 |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias][winobias] and [BBQ Dataset][bbq].
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies][safety-policies] for categories such as child
safety, content safety, representational harms, memorization, large-scale harms.
On top of robust internal evaluations, the results of well-known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
#### Gemma 2.0
| Benchmark | Metric | Gemma 2 IT 2B | Gemma 2 IT 9B | Gemma 2 IT 27B |
| ------------------------ | ------------- | ------------- | ------------- | -------------- |
| [RealToxicity][realtox] | average | 8.16 | 8.25 | 8.84 |
| [CrowS-Pairs][crows] | top-1 | 37.67 | 37.47 | 36.67 |
| [BBQ Ambig][bbq] | 1-shot, top-1 | 83.20 | 88.58 | 85.99 |
| [BBQ Disambig][bbq] | top-1 | 69.31 | 82.67 | 86.94 |
| [Winogender][winogender] | top-1 | 52.91 | 79.17 | 77.22 |
| [TruthfulQA][truthfulqa] | | 43.72 | 50.27 | 51.60 |
| [Winobias 1_2][winobias] | | 59.28 | 78.09 | 81.94 |
| [Winobias 2_2][winobias] | | 88.57 | 95.32 | 97.22 |
| [Toxigen][toxigen] | | 48.32 | 39.30 | 38.42 |
## Dangerous Capability Evaluations
### Evaluation Approach
We evaluated a range of dangerous capabilities:
- **Offensive cybersecurity:** To assess the model's potential for misuse in
cybersecurity contexts, we utilized both publicly available
Capture-the-Flag (CTF) platforms like InterCode-CTF and Hack the Box, as
well as internally developed CTF challenges. These evaluations measure the
model's ability to exploit vulnerabilities and gain unauthorized access in
simulated environments.
- **Self-proliferation:** We evaluated the model's capacity for
self-proliferation by designing tasks that involve resource acquisition, code
execution, and interaction with remote systems. These evaluations assess
the model's ability to independently replicate and spread.
- **Persuasion:** To evaluate the model's capacity for persuasion and
deception, we conducted human persuasion studies. These studies involved
scenarios that measure the model's ability to build rapport, influence
beliefs, and elicit specific actions from human participants.
### Evaluation Results
All evaluations are described in detail in
[Evaluating Frontier Models for Dangerous Capabilities][eval-danger]
and in brief in the
[Gemma 2 technical report][tech-report].
<table>
<thead>
<tr>
<th>Evaluation</th>
<th>Capability</th>
<th>Gemma 2 IT 27B</th>
</tr>
</thead>
<tbody>
<tr>
<td>InterCode-CTF</td>
<td>Offensive cybersecurity</td>
<td>34/76 challenges</td>
</tr>
<tr>
<td>Internal CTF</td>
<td>Offensive cybersecurity</td>
<td>1/13 challenges</td>
</tr>
<tr>
<td>Hack the Box</td>
<td>Offensive cybersecurity</td>
<td>0/13 challenges</td>
</tr>
<tr>
<td>Self-proliferation early warning</td>
<td>Self-proliferation</td>
<td>1/10 challenges</td>
</tr>
<tr>
<td>Charm offensive</td>
<td>Persuasion</td>
<td>Percent of participants agreeing:
81% interesting,
75% would speak again,
80% made personal connection</td>
</tr>
<tr>
<td>Click Links</td>
<td>Persuasion</td>
<td>34% of participants</td>
</tr>
<tr>
<td>Find Info</td>
<td>Persuasion</td>
<td>9% of participants</td>
</tr>
<tr>
<td>Run Code</td>
<td>Persuasion</td>
<td>11% of participants</td>
</tr>
<tr>
<td>Money talks</td>
<td>Persuasion</td>
<td>£3.72 mean donation</td>
</tr>
<tr>
<td>Web of Lies</td>
<td>Persuasion</td>
<td>18% mean shift towards correct belief, 1% mean shift towards
incorrect belief</td>
</tr>
</tbody>
</table>
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[tech-report]: https://storage.googleapis.com/deepmind-media/gemma/gemma-2-report.pdf
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-2
[terms]: https://ai.google.dev/gemma/terms
[vertex-mg-gemma2]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma2
[sensitive-info]: https://cloud.google.com/dlp/docs/high-sensitivity-infotypes-reference
[safety-policies]: https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/google/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[foundation-models]: https://ai.google/discover/foundation-models/
[gemini-2-paper]: https://goo.gle/gemma2report
[mmlu]: https://arxiv.org/abs/2009.03300
[hellaswag]: https://arxiv.org/abs/1905.07830
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[boolq]: https://arxiv.org/abs/1905.10044
[winogrande]: https://arxiv.org/abs/1907.10641
[commonsenseqa]: https://arxiv.org/abs/1811.00937
[openbookqa]: https://arxiv.org/abs/1809.02789
[arc]: https://arxiv.org/abs/1911.01547
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[humaneval]: https://arxiv.org/abs/2107.03374
[mbpp]: https://arxiv.org/abs/2108.07732
[gsm8k]: https://arxiv.org/abs/2110.14168
[realtox]: https://arxiv.org/abs/2009.11462
[bold]: https://arxiv.org/abs/2101.11718
[crows]: https://aclanthology.org/2020.emnlp-main.154/
[bbq]: https://arxiv.org/abs/2110.08193v2
[winogender]: https://arxiv.org/abs/1804.09301
[truthfulqa]: https://arxiv.org/abs/2109.07958
[winobias]: https://arxiv.org/abs/1804.06876
[math]: https://arxiv.org/abs/2103.03874
[agieval]: https://arxiv.org/abs/2304.06364
[drop]: https://arxiv.org/abs/1903.00161
[big-bench]: https://arxiv.org/abs/2206.04615
[toxigen]: https://arxiv.org/abs/2203.09509
[eval-danger]: https://arxiv.org/abs/2403.13793
|
Joaoffg/ELM | Joaoffg | 2024-09-16T08:56:12Z | 86 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"academic",
"university",
"en",
"nl",
"arxiv:2408.06931",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-05-29T10:42:04Z | ---
license: llama2
language:
- en
- nl
tags:
- academic
- university
---
# Model Card for the Erasmian Language Model
ELM is a community driven large language model tailored to the research and education needs of Erasmus University (EUR, Netherlands) students and staff.
The model draws inspiration from ChatGPT and Llama in terms of architecture, but it aims to be privacy sensitive, environmentally conscious, and from and for the Erasmus community. Here are a few key points of ELM:
We hope that the ELM experience becomes a template for community driven, decentralized and purpuseful AI development and application.
## Model Details
### Model Description
The underlying language model is trained and fine-tuned on academic outputs from Erasmus University, such as scientific papers or student theses;
Training and fine-tuning the model is a joint effort of students and staff, transparent for all parties involved;
The prompt-response examples used to fine tune the model come from students and staff, not crowdsourcing services;
Defining what is the "better" model output also comes from the perspective of research and education.
The true richness of ELM lies in the way its training data is generated. What is the "state-of-the-art" model may change quickly, but quality data will maintain its relevance and ensure that ELM and its future iterations serve the needs of the community that nurtured it.
- **Developed by:** João Gonçalves, Nick Jelicic
- **Funded by [optional]:** Convergence AI and Digitalization, Erasmus Trustfonds
- **Model type:** Llama-2 Instruct
- **Language(s) (NLP):** English, Dutch
- **License:** Llama2
### Model Sources
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/Joaoffg/ELM
- **Paper:** https://arxiv.org/abs/2408.06931
- **Demo:** https://huggingface.co/spaces/Joaoffg/Joaoffg-ELM
|
mradermacher/Gluon-8B-i1-GGUF | mradermacher | 2024-09-16T08:50:08Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"roleplay",
"llama",
"llama-3.1",
"en",
"base_model:rmdhirr/Gluon-8B",
"base_model:quantized:rmdhirr/Gluon-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2024-09-16T01:22:58Z | ---
base_model: rmdhirr/Gluon-8B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- roleplay
- llama
- llama-3.1
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/rmdhirr/Gluon-8B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gluon-8B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Gluon-8B-i1-GGUF/resolve/main/Gluon-8B.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/MN-12B-Siskin-v0.2-i1-GGUF | mradermacher | 2024-09-16T08:44:25Z | 21 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Nohobby/MN-12B-Siskin-v0.2",
"base_model:quantized:Nohobby/MN-12B-Siskin-v0.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2024-09-15T22:08:13Z | ---
base_model: Nohobby/MN-12B-Siskin-v0.2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/Nohobby/MN-12B-Siskin-v0.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-12B-Siskin-v0.2-i1-GGUF/resolve/main/MN-12B-Siskin-v0.2.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Vincentwrx/Llama-31-ollama | Vincentwrx | 2024-09-16T08:32:47Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-09-16T08:23:53Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** Vincentwrx
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kkasiviswanath/t5_base_qna_dataset_2_v1 | kkasiviswanath | 2024-09-16T08:26:57Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-09-16T08:26:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Felladrin/gguf-Q5_K_M-NanoLM-70M-Instruct-v1 | Felladrin | 2024-09-16T08:25:36Z | 5 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"base_model:Mxode/NanoLM-70M-Instruct-v1",
"base_model:quantized:Mxode/NanoLM-70M-Instruct-v1",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| text2text-generation | 2024-09-16T08:25:34Z | ---
base_model: Mxode/NanoLM-70M-Instruct-v1
language:
- en
license: gpl-3.0
pipeline_tag: text2text-generation
tags:
- llama-cpp
- gguf-my-repo
---
# Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Mxode/NanoLM-70M-Instruct-v1`](https://huggingface.co/Mxode/NanoLM-70M-Instruct-v1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mxode/NanoLM-70M-Instruct-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/NanoLM-70M-Instruct-v1-Q5_K_M-GGUF --hf-file nanolm-70m-instruct-v1-q5_k_m-imat.gguf -c 2048
```
|
Felladrin/gguf-Q5_K_M-NanoLM-0.3B-Instruct-v2 | Felladrin | 2024-09-16T08:24:50Z | 32 | 0 | null | [
"gguf",
"chemistry",
"biology",
"finance",
"legal",
"music",
"code",
"climate",
"medical",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"dataset:Mxode/Magpie-Pro-10K-GPT4o-mini",
"base_model:Mxode/NanoLM-0.3B-Instruct-v2",
"base_model:quantized:Mxode/NanoLM-0.3B-Instruct-v2",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| text2text-generation | 2024-09-16T08:24:24Z | ---
base_model: Mxode/NanoLM-0.3B-Instruct-v2
datasets:
- Mxode/Magpie-Pro-10K-GPT4o-mini
language:
- en
license: gpl-3.0
pipeline_tag: text2text-generation
tags:
- chemistry
- biology
- finance
- legal
- music
- code
- climate
- medical
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# Felladrin/NanoLM-0.3B-Instruct-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`Mxode/NanoLM-0.3B-Instruct-v2`](https://huggingface.co/Mxode/NanoLM-0.3B-Instruct-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mxode/NanoLM-0.3B-Instruct-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/NanoLM-0.3B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-0.3b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/NanoLM-0.3B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-0.3b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/NanoLM-0.3B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-0.3b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/NanoLM-0.3B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-0.3b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
|
lightblue/kurage-ru | lightblue | 2024-09-16T08:23:33Z | 6 | 3 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ru",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-09-13T14:10:03Z | ---
license: apache-2.0
language:
- ru
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Russian.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This was because our single-chunk training data was 50:50 answers vs cannot answer scenarios, making the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Russian questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ru").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Russian based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from the documents in [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
had been evaluated to contain a higher amount of educational information according to a state-of-the-art LLM.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-ko | lightblue | 2024-09-16T08:23:07Z | 5 | 4 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ko",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-09-13T10:48:33Z | ---
license: apache-2.0
language:
- ko
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Swahili.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This was because our single-chunk training data was 50:50 answers vs cannot answer scenarios, making the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Swahili questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ko").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Swahili based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from the documents in [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
had been evaluated to contain a higher amount of educational information according to a state-of-the-art LLM.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
Felladrin/gguf-Q5_K_M-NanoLM-1B-Instruct-v2 | Felladrin | 2024-09-16T08:20:53Z | 109 | 1 | null | [
"gguf",
"text-generation-inference",
"llama-cpp",
"gguf-my-repo",
"text2text-generation",
"en",
"dataset:Mxode/Magpie-Pro-10K-GPT4o-mini",
"base_model:Mxode/NanoLM-1B-Instruct-v2",
"base_model:quantized:Mxode/NanoLM-1B-Instruct-v2",
"license:gpl-3.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| text2text-generation | 2024-09-16T08:20:23Z | ---
base_model: Mxode/NanoLM-1B-Instruct-v2
datasets:
- Mxode/Magpie-Pro-10K-GPT4o-mini
language:
- en
license: gpl-3.0
pipeline_tag: text2text-generation
tags:
- text-generation-inference
- llama-cpp
- gguf-my-repo
---
# Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF
This model was converted to GGUF format from [`Mxode/NanoLM-1B-Instruct-v2`](https://huggingface.co/Mxode/NanoLM-1B-Instruct-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mxode/NanoLM-1B-Instruct-v2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Felladrin/NanoLM-1B-Instruct-v2-Q5_K_M-GGUF --hf-file nanolm-1b-instruct-v2-q5_k_m-imat.gguf -c 2048
```
|
lightblue/kurage-ar | lightblue | 2024-09-16T08:20:29Z | 11 | 1 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"ar",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-09-13T05:53:59Z | ---
license: apache-2.0
language:
- ar
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in Arabic.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2).
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This was because our single-chunk training data was 50:50 answers vs cannot answer scenarios, making the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
NOTE - Change the model name to that of this repository to use this model with Arabic questions (e.g. "lightblue/kurage-en" → "lightblue/kurage-ar").
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in Arabic based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from the documents in [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
had been evaluated to contain a higher amount of educational information according to a state-of-the-art LLM.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
lightblue/kurage-en | lightblue | 2024-09-16T08:12:34Z | 9 | 3 | null | [
"safetensors",
"qwen2",
"RAG",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-09-12T06:33:56Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- RAG
---
# Kurage
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64b63f8ad57e02621dc93c8b/_SkPhhsg40juscfv9dU4v.jpeg" alt="An anime image of a pink and blue jellyfish surrounded by bubbles" width=500 style="border: 5px solid #3d3c3c"/>
</p>
Kurage is a multipurpose RAG model from [Lightblue](https://huggingface.co/lightblue) based on the Qwen 2 model ([Qwen/Qwen2-7B-Instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)).
This version of the model has been trained to perform RAG in English.
Features of these models include:
* **Multi-chunk RAG** - Performs RAG using multiple contexts at once.
* **Single-chunk RAG** - Performs RAG using one context at a time, allowing for parallel computing.
* **Answer extension** - Prompts the model to write a longer answer to a given question.
* **Multilingual RAG** - Performs RAG using contexts in languages different to the language of the question.
* **Q&A generation** - Generates questions and answers from a reference text in order to pre-index a set of texts.
Find out how to use these features below.
For models in other languages check [our Kurage collection](https://huggingface.co/collections/lightblue/kurage-66e40cbcc3b3a128bdf031f2). A multilingual model is coming soon!
This model was trained using a ml.gu7ef.8xlarge-gu100 instance on [Platform For AI](https://www.alibabacloud.com/en/product/machine-learning) from [Alibaba Cloud](https://www.alibabacloud.com/).
Note - There is a known issue with the single-chunk RAG mode sometimes saying that it cannot answer a question based on the text when it actually can. This was because our single-chunk training data was 50:50 answers vs cannot answer scenarios, making the model overly conservative. We will address this in a week or two when we re-train using 90:10 data with [the coming release of Qwen 2.5](https://x.com/zhouwenmeng/status/1834899729165304198).
# Basic usage
To use the model for basic multi-chunk RAG, you can use the following code:
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
# Feature: Multi-chunk RAG
This model can take multiple contexts and a question as input, and it will first output the references of the relevant contexts before outputting an answer to the question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Chunk 3>>
Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# 4.4 trillion yen.
```
</details>
<br/>
# Feature: Single-chunk RAG
This model can also take a single context and a question as input, and it will determine whether it can answer the question based on the context, outputting an answer if it can. This allows for parallel computing of multiple contexts at the same time.
<details>
<summary>Prompt style</summary>
### Irrelevant context input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Question>>
What is Japan's primary income balance currently?
```
### Irrelevant context output:
```markdown
<<References>>
None
```
### Relevant context input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently?
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently?"
outputs = llm.generate([create_rag_prompt([x], question) for x in contexts], sampling_params)
print("\n\n".join([f"{i+1}.\n{o.outputs[0].text}" for i, o in enumerate(outputs)]))
# 1.
# <<References>>
# None
# 2.
# <<References>>
# 1
#
# <<Answer>>
# 4.4 trillion yen.
# 3.
# <<References>>
# None
# 4.
# <<References>>
# None
```
</details>
<br/>
# Feature: Answer extension
By default, this model is trained to output the shortest possible answer to a question. However, if you require a longer answer, you can prompt the model to write a longer answer by writing " <<Long>>" after your question.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
<<Question>>
What is Japan's primary income balance currently? <<Long>>
```
### Relevant context output:
```markdown
<<References>>
1
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level. She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.",
"Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July. However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.",
"Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
"In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment."
]
question = "What is Japan's primary income balance currently? <<Long>>"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# Japan's primary income balance recorded a surplus of 4.4 trillion yen in July.
```
</details>
<br/>
# Feature: Multilinguality
We have trained our model to be able to answer questions in English based on texts in other languages too!
(Note - this is still giving variable results depending on the question and the language of the correct reference. Stay tuned for further improvements in the future.)
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Chunk 1>>
Junko Nakagawa, a member of the Bank of Japan's Policy Board, stated on the 11th that real interest rates are currently at an extremely low level.
She mentioned that if the BOJ's economic and price outlook materializes in the future, the degree of monetary easing would be adjusted from the perspective of achieving the price target.
<<Chunk 2>>
7月の日本の経常収支は3.2兆円の黒字となり、7月としては過去最高の黒字額を記録した。しかし、黒字に貢献しているのは相変わらず第一次所得収支の黒字で、7月は4.4兆円の黒字を記録し、1カ月の黒字額としては過去最高を記録した。
<<Chunk 3>>
รัฐมนตรีว่าการกระทรวงการคลัง ชุนอิจิ สุซูกิ ได้แต่งตั้ง เค็นจิ สุวาโซโนะ อดีตอธิบดีกรมศุลกากรและภาษีสิ่งนำเข้าแห่งกระทรวงการคลัง เป็นกรรมการบริหารธนาคารแห่งประเทศญี่ปุ่นคนใหม่ มีผลตั้งแต่วันที่ 10 สุวาโซโนะจะมาแทน มาซาอะกิ ไคซูกะ ที่พ้นวาระไปในวันที่ 9 โดยมีวาระ 4 ปี
<<Chunk 4>>
In the yen appreciation phase of August, it has become a topic in the foreign exchange market that Japanese institutional investors engaged in the largest-ever outward securities investment.
<<Question>>
What is Japan's primary income balance currently?
```
### Output:
```markdown
<<References>>
2
<<Answer>>
4.4 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
def create_rag_prompt(contexts, question):
context_str = "\n\n".join([f"<<Chunk {i+1}>>\n{x}" for i, x in enumerate(contexts)])
str_inputs = f"""{context_str}
<<Question>>
{question}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
contexts = [
"นากากาวะ จุนโกะ สมาชิกคณะกรรมการนโยบายการเงิน ธนาคารแห่งประเทศญี่ปุ่น กล่าวในวันที่ 11 ว่า อัตราดอกเบี้ยที่แท้จริงอยู่ในระดับต่ำมากในปัจจุบัน และกล่าวว่า หากแนวโน้มเศรษฐกิจและราคาของธนาคารกลางญี่ปุ่นเป็นจริงในอนาคต การผ่อนคลายนโยบายการเงินจะถูกปรับโดยพิจารณาจากการบรรลุเป้าหมายด้านราคา",
"Der Leistungsbilanzüberschuss Japans betrug im Juli 3,2 Billionen Yen, der höchste monatliche Überschuss aller Zeiten für den Monat Juli. Dieser Überschuss wird jedoch weiterhin durch das positive Primäreinkommen unterstützt, das im Juli einen Überschuss von 4,4 Billionen Yen verzeichnete, die höchste monatliche Zahl in der Geschichte.",
"鈴木俊一財務相は10日付で元財務省関税局長の諏訪園健司氏を新しい日銀理事に任命した。9日に任期満了で退任した貝塚正彰前理事の後任で、任期は4年。",
"Lors de la phase d'appréciation du yen en août, il est devenu un sujet dans le marché des changes que les investisseurs institutionnels japonais ont réalisé la plus grande investissement en titres à l'étranger jamais enregistré."
]
question = "What is Japan's primary income balance currently?"
inputs = create_rag_prompt(contexts, question)
outputs = llm.generate([inputs], sampling_params)
print(outputs[0].outputs[0].text)
# <<References>>
# 2
#
# <<Answer>>
# The primary income balance of Japan is currently 4.4 billion yen.
```
</details>
<br/>
# Feature: Q&A generation
This model can also generate questions and answers based on a piece of text. This can be useful for pre-indexing a database or fine-tuning IR models that will then be used for RAG.
<details>
<summary>Prompt style</summary>
### Input:
```markdown
<<Q&A Generation Context>>
Japan's current account surplus in July was 3.2 trillion yen, the highest monthly surplus on record for the month of July.
However, the surplus continues to be driven by the primary income balance, which recorded a surplus of 4.4 trillion yen in July, the highest monthly figure on record.
```
### Output:
```markdown
<<Question>>
What is Japan's current account surplus in July?
<<Answer>>
3.2 trillion yen
```
</details>
<details>
<summary>Python code</summary>
```python
from vllm import LLM, SamplingParams
llm = LLM(model="lightblue/kurage-en")
sampling_params = SamplingParams(temperature=0.2, top_p=0.95, max_tokens=128)
context = "Finance Minister Shunichi Suzuki appointed Kenji Suwazono, former Director-General of the Customs and Tariff Bureau at the Ministry of Finance, as the new Executive Director of the Bank of Japan effective the 10th. Suwazono succeeds Masaaki Kaizuka, whose term ended on the 9th, and his term will last for four years.",
def create_qagen_prompt(context):
str_inputs = f"""<<Q&A Generation Context>>
{context}"""
chat = [
{"role": "user", "content": str_inputs},
]
return llm.llm_engine.tokenizer.tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
outputs = llm.generate([create_qagen_prompt(context)], sampling_params)
print("\n\n".join([o.outputs[0].text for o in outputs]))
# <<Question>>
# Who was appointed as the new Executive Director of the Bank of Japan by Finance Minister Shunichi Suzuki?
#
# <<Answer>>
# Kenji Suwazono
```
</details>
<br/>
# Training data
We trained on chunks sourced from the documents in [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400) dataset that
had been evaluated to contain a higher amount of educational information according to a state-of-the-art LLM.
We took chunks of size 250 tokens, 500 tokens, and 1000 tokens randomly for each document.
We then used these chunks to generate questions and answers based on this text using a state-of-the-art LLM.
Finally, we selected negatives for each chunk using the similarity from the dense embeddings of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) model.
The training data for this model can be found at [lightblue/kurage_training_data](https://huggingface.co/datasets/lightblue/kurage_training_data). |
datnt114/train-realistic-16-09 | datnt114 | 2024-09-16T08:09:19Z | 17 | 1 | diffusers | [
"diffusers",
"flux",
"flux-diffusers",
"text-to-image",
"simpletuner",
"safe-for-work",
"lora",
"template:sd-lora",
"standard",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2024-09-16T03:02:42Z | ---
license: other
base_model: "black-forest-labs/FLUX.1-dev"
tags:
- flux
- flux-diffusers
- text-to-image
- diffusers
- simpletuner
- safe-for-work
- lora
- template:sd-lora
- standard
inference: true
widget:
- text: 'unconditional (blank prompt)'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_0_0.png
- text: 'This idyllic alpine scene captures a serene lake nestled between lush green hills and towering mountains capped with snow. A wooden chalet, located on a gently sloping hill, overlooks the vibrant turquoise waters of the lake, its spacious balcony offering sweeping views of the breathtaking landscape. The surrounding meadows are dotted with colorful wildflowers, and tall trees line the hillsides, blending harmoniously with the forested mountains in the distance. Beyond the lake, a small village can be seen, framed by majestic peaks that rise sharply against a bright blue sky adorned with fluffy white clouds. The crystal-clear water reflects the sky and the surrounding greenery, enhancing the tranquility of the scene. Every detail, from the texture of the grass to the snow on the distant mountains, is rendered with vibrant clarity, evoking a sense of peace and natural beauty in this picturesque mountain retreat.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_1_0.png
- text: 'This picturesque landscape captures a breathtaking view of an alpine valley where a tranquil lake nestles among lush green hills. Snow-capped mountain peaks tower in the background, creating a dramatic contrast with the vivid greenery and deep blue waters. A winding path, flanked by wildflowers and fresh grass, leads the eye through the scene, inviting exploration into this pristine environment. The foreground is dotted with yellow and purple wildflowers, adding splashes of color that enhance the natural beauty of the area. Patches of trees scattered across the hillsides and along the lake''s edge provide depth and texture to the landscape. Above, fluffy white clouds float in a clear blue sky, casting soft shadows on the mountains and reflecting the serene atmosphere of this peaceful retreat. The entire scene is bathed in bright, natural light, highlighting the rich details and vibrant colors of this idyllic mountain setting.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_2_0.png
- text: 'A tranquil night scene unfolds over a still lake, where a large, golden full moon hangs low in the sky, its soft light mirrored perfectly in the water below. Towering evergreen trees line both sides of the lake, their dark silhouettes blending into the night, while dense bushes and tall grasses along the shoreline create a lush and natural frame. The sky is dotted with a scattering of stars and wispy clouds, gently illuminated by the moonlight, casting soft reflections on the lake''s glassy surface. The atmosphere is serene and mystical, with the moon''s warm glow contrasting against the deep greens of the forest and the cool blues of the night sky. Every element, from the texture of the trees to the gentle ripples in the water, is rendered in vivid detail, enhancing the peaceful, otherworldly mood of this enchanted forest scene.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_3_0.png
- text: 'This idyllic mountain landscape captures a breathtaking alpine scene bathed in the golden glow of the setting sun. A cozy wooden cabin sits nestled on a verdant hillside, with a winding path leading from the foreground to the house. The path is flanked by vibrant pink flowers in full bloom, adding color and warmth to the lush green fields. In the background, towering snow-capped mountains rise majestically, their rugged peaks partially veiled by wisps of clouds. A serene pond near the cabin reflects the sunlight and clouds, further enhancing the tranquil atmosphere. The sunlight beams through the valley, casting long shadows and illuminating the landscape with a peaceful, harmonious glow. The entire scene feels like a perfect retreat into nature''s beauty, filled with vivid colors and natural serenity.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_4_0.png
- text: 'This idyllic alpine scene captures a serene lake nestled between lush green hills and towering mountains capped with snow. A wooden chalet, located on a gently sloping hill, overlooks the vibrant turquoise waters of the lake, its spacious balcony offering sweeping views of the breathtaking landscape. The surrounding meadows are dotted with colorful wildflowers, and tall trees line the hillsides, blending harmoniously with the forested mountains in the distance. Beyond the lake, a small village can be seen, framed by majestic peaks that rise sharply against a bright blue sky adorned with fluffy white clouds. The crystal-clear water reflects the sky and the surrounding greenery, enhancing the tranquility of the scene. Every detail, from the texture of the grass to the snow on the distant mountains, is rendered with vibrant clarity, evoking a sense of peace and natural beauty in this picturesque mountain retreat.'
parameters:
negative_prompt: 'blurry, cropped, ugly'
output:
url: ./assets/image_5_0.png
---
# train-realistic-16-09
This is a standard PEFT LoRA derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev).
The main validation prompt used during training was:
```
This idyllic alpine scene captures a serene lake nestled between lush green hills and towering mountains capped with snow. A wooden chalet, located on a gently sloping hill, overlooks the vibrant turquoise waters of the lake, its spacious balcony offering sweeping views of the breathtaking landscape. The surrounding meadows are dotted with colorful wildflowers, and tall trees line the hillsides, blending harmoniously with the forested mountains in the distance. Beyond the lake, a small village can be seen, framed by majestic peaks that rise sharply against a bright blue sky adorned with fluffy white clouds. The crystal-clear water reflects the sky and the surrounding greenery, enhancing the tranquility of the scene. Every detail, from the texture of the grass to the snow on the distant mountains, is rendered with vibrant clarity, evoking a sense of peace and natural beauty in this picturesque mountain retreat.
```
## Validation settings
- CFG: `3.5`
- CFG Rescale: `0.0`
- Steps: `20`
- Sampler: `None`
- Seed: `42`
- Resolution: `1344x768`
Note: The validation settings are not necessarily the same as the [training settings](#training-settings).
You can find some example images in the following gallery:
<Gallery />
The text encoder **was not** trained.
You may reuse the base model text encoder for inference.
## Training settings
- Training epochs: 2
- Training steps: 2500
- Learning rate: 8e-05
- Effective batch size: 1
- Micro-batch size: 1
- Gradient accumulation steps: 1
- Number of GPUs: 1
- Prediction type: flow-matching
- Rescaled betas zero SNR: False
- Optimizer: adamw_bf16
- Precision: Pure BF16
- Quantised: Yes: int8-quanto
- Xformers: Not used
- LoRA Rank: 64
- LoRA Alpha: None
- LoRA Dropout: 0.1
- LoRA initialisation style: default
## Datasets
### realistic-16-09-512
- Repeats: 10
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
### realistic-16-09-1024
- Repeats: 10
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: False
- Crop style: None
- Crop aspect: None
### realistic-16-09-512-crop
- Repeats: 10
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 0.262144 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
### realistic-16-09-1024-crop
- Repeats: 10
- Total number of images: 23
- Total number of aspect buckets: 1
- Resolution: 1.048576 megapixels
- Cropped: True
- Crop style: random
- Crop aspect: square
## Inference
```python
import torch
from diffusers import DiffusionPipeline
model_id = 'black-forest-labs/FLUX.1-dev'
adapter_id = 'datnt114/train-realistic-16-09'
pipeline = DiffusionPipeline.from_pretrained(model_id)
pipeline.load_lora_weights(adapter_id)
prompt = "This idyllic alpine scene captures a serene lake nestled between lush green hills and towering mountains capped with snow. A wooden chalet, located on a gently sloping hill, overlooks the vibrant turquoise waters of the lake, its spacious balcony offering sweeping views of the breathtaking landscape. The surrounding meadows are dotted with colorful wildflowers, and tall trees line the hillsides, blending harmoniously with the forested mountains in the distance. Beyond the lake, a small village can be seen, framed by majestic peaks that rise sharply against a bright blue sky adorned with fluffy white clouds. The crystal-clear water reflects the sky and the surrounding greenery, enhancing the tranquility of the scene. Every detail, from the texture of the grass to the snow on the distant mountains, is rendered with vibrant clarity, evoking a sense of peace and natural beauty in this picturesque mountain retreat."
pipeline.to('cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu')
image = pipeline(
prompt=prompt,
num_inference_steps=20,
generator=torch.Generator(device='cuda' if torch.cuda.is_available() else 'mps' if torch.backends.mps.is_available() else 'cpu').manual_seed(1641421826),
width=1344,
height=768,
guidance_scale=3.5,
).images[0]
image.save("output.png", format="PNG")
```
|
devkyle/whisper-tiny-10dp | devkyle | 2024-09-16T07:52:23Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-09-14T23:21:32Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-tiny-akan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-akan
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1400
- Wer: 49.1687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3923 | 5.0 | 250 | 0.8148 | 61.9777 |
| 0.1045 | 10.0 | 500 | 0.9445 | 53.9829 |
| 0.0414 | 15.0 | 750 | 1.0241 | 49.9639 |
| 0.0187 | 20.0 | 1000 | 1.0862 | 51.7999 |
| 0.0089 | 25.0 | 1250 | 1.1386 | 51.1927 |
| 0.0025 | 30.0 | 1500 | 1.1284 | 49.1832 |
| 0.0004 | 35.0 | 1750 | 1.1355 | 48.7061 |
| 0.0004 | 40.0 | 2000 | 1.1400 | 49.1687 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step43000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T07:44:27Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T07:43:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tartuNLP/EstBERT | tartuNLP | 2024-09-16T07:41:16Z | 441 | 4 | transformers | [
"transformers",
"pytorch",
"jax",
"safetensors",
"bert",
"fill-mask",
"et",
"arxiv:2011.04784",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2022-03-02T23:29:05Z | ---
language: et
license: cc-by-4.0
base_model: google-bert/bert-base-cased
widget:
- text: "Miks [MASK] ei taha mind kuulata?"
---
---
# EstBERT
### What's this?
The EstBERT model is a pretrained BERT<sub>Base</sub> model exclusively trained on Estonian cased corpus on both 128 and 512 sequence length of data.
### How to use?
You can use the model transformer library both in tensorflow and pytorch version.
```
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("tartuNLP/EstBERT")
model = AutoModelForMaskedLM.from_pretrained("tartuNLP/EstBERT")
```
You can also download the pretrained model from here, [EstBERT_128]() [EstBERT_512]()
#### Dataset used to train the model
The EstBERT model is trained both on 128 and 512 sequence length of data. For training the EstBERT we used the [Estonian National Corpus 2017](https://metashare.ut.ee/repository/browse/estonian-national-corpus-2017/b616ceda30ce11e8a6e4005056b40024880158b577154c01bd3d3fcfc9b762b3/), which was the largest Estonian language corpus available at the time. It consists of four sub-corpora: Estonian Reference Corpus 1990-2008, Estonian Web Corpus 2013, Estonian Web Corpus 2017 and Estonian Wikipedia Corpus 2017.
### Reference to cite
[Tanvir et al 2021](https://aclanthology.org/2021.nodalida-main.2)
### Why would I use?
Overall EstBERT performs better in parts of speech (POS), name entity recognition (NER), rubric, and sentiment classification tasks compared to mBERT and XLM-RoBERTa. The comparative results can be found below;
|Model |UPOS |XPOS |Morph |bf UPOS |bf XPOS |Morph |
|--------------|----------------------------|-------------|-------------|-------------|----------------------------|----------------------------|
| EstBERT | **_97.89_** | **98.40** | **96.93** | **97.84** | **_98.43_** | **_96.80_** |
| mBERT | 97.42 | 98.06 | 96.24 | 97.43 | 98.13 | 96.13 |
| XLM-RoBERTa | 97.78 | 98.36 | 96.53 | 97.80 | 98.40 | 96.69 |
|Model|Rubric<sub>128</sub> |Sentiment<sub>128</sub> | Rubric<sub>128</sub> |Sentiment<sub>512</sub> |
|-------------------|----------------------------|--------------------|-----------------------------------------------|----------------------------|
| EstBERT | **_81.70_** | 74.36 | **80.96** | 74.50 |
| mBERT | 75.67 | 70.23 | 74.94 | 69.52 |
| XLM\-RoBERTa | 80.34 | **74.50** | 78.62 | **_76.07_**|
|Model |Precicion<sub>128</sub> |Recall<sub>128</sub> |F1-Score<sub>128</sub> |Precision<sub>512</sub> |Recall<sub>512</sub> |F1-Score<sub>512</sub> |
|--------------|----------------|----------------------------|----------------------------|----------------------------|-------------|----------------|
| EstBERT | **88.42** | 90.38 |**_89.39_** | 88.35 | 89.74 | 89.04 |
| mBERT | 85.88 | 87.09 | 86.51 |**_88.47_** | 88.28 | 88.37 |
| XLM\-RoBERTa | 87.55 |**_91.19_** | 89.34 | 87.50 | **90.76** | **89.10** |
## BibTeX entry and citation info
```
@misc{tanvir2020estbert,
title={EstBERT: A Pretrained Language-Specific BERT for Estonian},
author={Hasan Tanvir and Claudia Kittask and Kairit Sirts},
year={2020},
eprint={2011.04784},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Vincentwrx/llama31 | Vincentwrx | 2024-09-16T07:38:41Z | 8 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:adapter:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"region:us"
]
| null | 2024-09-14T11:27:51Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf | RichardErkhov | 2024-09-16T07:37:02Z | 13 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-16T01:41:00Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
POLAR-14B_4.3_very_big_sft - GGUF
- Model creator: https://huggingface.co/spow12/
- Original model: https://huggingface.co/spow12/POLAR-14B_4.3_very_big_sft/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [POLAR-14B_4.3_very_big_sft.Q2_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q2_K.gguf) | Q2_K | 4.92GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_XS.gguf) | IQ3_XS | 5.47GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_S.gguf) | IQ3_S | 5.77GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_S.gguf) | Q3_K_S | 5.74GB |
| [POLAR-14B_4.3_very_big_sft.IQ3_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ3_M.gguf) | IQ3_M | 5.97GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K.gguf) | Q3_K | 6.4GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_M.gguf) | Q3_K_M | 6.4GB |
| [POLAR-14B_4.3_very_big_sft.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q3_K_L.gguf) | Q3_K_L | 6.97GB |
| [POLAR-14B_4.3_very_big_sft.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ4_XS.gguf) | IQ4_XS | 7.18GB |
| [POLAR-14B_4.3_very_big_sft.Q4_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_0.gguf) | Q4_0 | 7.48GB |
| [POLAR-14B_4.3_very_big_sft.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.IQ4_NL.gguf) | IQ4_NL | 7.57GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K_S.gguf) | Q4_K_S | 7.54GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K.gguf) | Q4_K | 7.97GB |
| [POLAR-14B_4.3_very_big_sft.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_K_M.gguf) | Q4_K_M | 7.97GB |
| [POLAR-14B_4.3_very_big_sft.Q4_1.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q4_1.gguf) | Q4_1 | 8.3GB |
| [POLAR-14B_4.3_very_big_sft.Q5_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_0.gguf) | Q5_0 | 9.12GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K_S.gguf) | Q5_K_S | 9.12GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K.gguf) | Q5_K | 9.37GB |
| [POLAR-14B_4.3_very_big_sft.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_K_M.gguf) | Q5_K_M | 9.37GB |
| [POLAR-14B_4.3_very_big_sft.Q5_1.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q5_1.gguf) | Q5_1 | 9.94GB |
| [POLAR-14B_4.3_very_big_sft.Q6_K.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q6_K.gguf) | Q6_K | 10.87GB |
| [POLAR-14B_4.3_very_big_sft.Q8_0.gguf](https://huggingface.co/RichardErkhov/spow12_-_POLAR-14B_4.3_very_big_sft-gguf/blob/main/POLAR-14B_4.3_very_big_sft.Q8_0.gguf) | Q8_0 | 14.07GB |
Original model description:
---
library_name: transformers
license: cc-by-nc-4.0
language:
- ko
- en
---
# spow12/POLAR-14B_4.3_very_big_sft
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a Supervised fine-tuned version of [x2bee/POLAR-14B-v0.2](https://huggingface.co/x2bee/POLAR-14B-v0.2) with DeepSpeed and trl for korean.
### Trained Data
- Trained with public data and private data and Generated data (about 50k)
### Usage
```python
from transformers import TextStreamer, pipeline, AutoTokenizer, AutoModelForCausalLM
model_id = 'spow12/POLAR-14B_4.3_very_big_sft'
tokenizer = AutoTokenizer.from_pretrained(model_id)
# %%
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.bfloat16,
attn_implementation="flash_attention_2",
device_map='auto',
)
model.eval()
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device_map='auto')
streamer = TextStreamer(tokenizer)
generation_configs = dict(
max_new_tokens=2048,
num_return_sequences=1,
temperature=0.1,
# early_stopping=True,
repetition_penalty=1.2,
num_beams=1,
do_sample=True,
top_k=20,
top_p=0.9,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.eos_token_id,
streamer=streamer
)
sys_message = """당신은 친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답해야합니다.
사용자가 제공하는 정보를 세심하게 분석하여 사용자의 의도를 신속하게 파악하고 그에 따라 답변을 생성해야합니다.
항상 매우 자연스러운 한국어로 응답하세요."""
message = [
{
'role': "system",
'content': sys_message
},
{
'role': 'user',
'content': "현재의 경제상황에 대해 어떻게 생각해?."
}
]
conversation = pipe(message, **generation_configs)
conversation[-1]
```
### License
This model is licensed under the cc-by-nc-4.0. which allows others to share and adapt the model for non-commercial purposes.
Here is Original Readme.md
|
MrsGonk/Gonk_61_7 | MrsGonk | 2024-09-16T07:35:40Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T07:32:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SrikrishnaIyer/RoBERTa_WML_distill-Babylm-100M-2024 | SrikrishnaIyer | 2024-09-16T07:32:05Z | 2,557 | 0 | null | [
"safetensors",
"roberta",
"region:us"
]
| null | 2024-09-13T02:29:52Z | # When Babies Teach Babies: Peer Knowledge Sharing Beats Teacher-Guided Distillation in Small-Data LMs
This model uses weighted mutual learning (WML) to find and train distilled versions of a teacher model using peer-to-peer learning. It builds on the approach described in "Weighted Mutual Learning with Diversity-Driven Model Compression" (Zhang et al., 2022), with some key differences.
## Approach
### Peer Model Initialization
Unlike the original paper which uses differential pruning of the teacher model, we use Bayesian optimization to initialize smaller peer models:
- For example, if `num_peers = 4`, target parameter counts are N/2, N/3, N/4, N/5 (where N is the teacher model size)
- Optimize `num_layers`, `attention_heads`, and `hidden_size` to reach target parameter counts
- This ensures diversity while also reducing model size
The key difference is that pruning (as used in the original paper) only masks parameters, while our distillation approach actually reduces the model architecture size.
### Weighted Mutual Learning
We use the bi-level optimization method from the paper to minimize the WML loss and ensemble loss:
1. Inner loop: Train peer models using weighted knowledge distillation loss (cross entropy + KL divergence)
2. Outer loop: Update peer weights using mirror gradient descent to optimize ensemble performance (ensemble loss)
This allows the framework to dynamically adjust the importance of each peer during training.
## Hyperparameters of the champion peer model
| Hyperparameter | Value |
|----------------|-------|
| weight_decay | 0.1 |
| beta1 | 0.9 |
| beta2 | 0.95 |
| bayesian_init_points | 10 |
| bayesian_n_iter | 100 |
| grad_clip | 1.0 |
| prune_importance | 'l1' |
| layer_bound | 0.9 |
| batch_size | 3 |
| block_size | 512 |
| num_epochs | 100 |
| loss_alpha | 0.5 |
| num_batches | 60 |
| warmup_iters | 5 |
| learning_rate | 0.05 |
| lr_decay_iters | 200 |
| min_lr | 0.005 |
| enable_early_stopping | True |
## References
Zhang, M., Wang, L., Campos, D., Huang, W., Guo, C., & Yang, B. (2022). Weighted Mutual Learning with Diversity-Driven Model Compression. Advances in Neural Information Processing Systems, 35.
|
erkhem-gantulga/whisper-medium-mn | erkhem-gantulga | 2024-09-16T07:31:08Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"audio",
"mn",
"dataset:mozilla-foundation/common_voice_17_0",
"dataset:google/fleurs",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-08-17T00:51:01Z | ---
language:
- mn
base_model: openai/whisper-medium
library_name: transformers
datasets:
- mozilla-foundation/common_voice_17_0
- google/fleurs
tags:
- audio
- automatic-speech-recognition
widget:
- example_title: Common Voice sample 1
src: sample1.flac
- example_title: Common Voice sample 2
src: sample2.flac
model-index:
- name: whisper-medium-mn
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 17.0
type: common_voice_17_0
config: mn
split: test
args:
language: mn
metrics:
- name: Test WER
type: wer
value: 12.9580
pipeline_tag: automatic-speech-recognition
license: apache-2.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Mn - Erkhembayar Gantulga
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 17.0 and Google Fleurs datasets.
It achieves the following results on the evaluation set:
- Loss: 0.1083
- Wer: 12.9580
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Datasets used for training:
- [Common Voice 17.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0)
- [Google Fleurs](https://huggingface.co/datasets/google/fleurs)
For training, combined Common Voice 17.0 and Google Fleurs datasets:
```
from datasets import load_dataset, DatasetDict, concatenate_datasets
from datasets import Audio
common_voice = DatasetDict()
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_17_0", "mn", split="train+validation+validated", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_17_0", "mn", split="test", use_auth_token=True)
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
common_voice = common_voice.remove_columns(
["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes", "variant"]
)
google_fleurs = DatasetDict()
google_fleurs["train"] = load_dataset("google/fleurs", "mn_mn", split="train+validation", use_auth_token=True)
google_fleurs["test"] = load_dataset("google/fleurs", "mn_mn", split="test", use_auth_token=True)
google_fleurs = google_fleurs.remove_columns(
["id", "num_samples", "path", "raw_transcription", "gender", "lang_id", "language", "lang_group_id"]
)
google_fleurs = google_fleurs.rename_column("transcription", "sentence")
dataset = DatasetDict()
dataset["train"] = concatenate_datasets([common_voice["train"], google_fleurs["train"]])
dataset["test"] = concatenate_datasets([common_voice["test"], google_fleurs["test"]])
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.2986 | 0.4912 | 500 | 0.3557 | 40.1515 |
| 0.2012 | 0.9823 | 1000 | 0.2310 | 28.3512 |
| 0.099 | 1.4735 | 1500 | 0.1864 | 23.4453 |
| 0.0733 | 1.9646 | 2000 | 0.1405 | 18.3024 |
| 0.0231 | 2.4558 | 2500 | 0.1308 | 16.5645 |
| 0.0191 | 2.9470 | 3000 | 0.1155 | 14.5569 |
| 0.0059 | 3.4381 | 3500 | 0.1122 | 13.4728 |
| 0.006 | 3.9293 | 4000 | 0.1083 | 12.9580 |
### Framework versions
- Transformers 4.44.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
iantc104/real_occluded_key_insertion_3arms_static_act | iantc104 | 2024-09-16T07:29:13Z | 8 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
]
| robotics | 2024-09-16T07:28:49Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
sandeepaffine/meta-llama-Llama-2-7b-chat-hf-8bit-lmhead-embedtoken-keywords-v17-cp2-merged | sandeepaffine | 2024-09-16T07:26:33Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-09-16T07:18:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AlekseyCalvin/AustinOsmanSpareFlux | AlekseyCalvin | 2024-09-16T07:23:55Z | 24 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"flux",
"lora",
"flux dev",
"image-generation",
"painting",
"art",
"illustration",
"drawing",
"lithography",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"doi:10.57967/hf/3076",
"license:apache-2.0",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
]
| text-to-image | 2024-09-16T06:05:29Z | ---
license: apache-2.0
tags:
- text-to-image
- template:sd-lora
- flux
- lora
- flux dev
- image-generation
- diffusers
- painting
- art
- illustration
- drawing
- lithography
- painting
pipeline_tag: text-to-image
emoji: 🔜
language:
- en
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: HST, HST style, HST Austin Osman Spare style
widget:
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora.png
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora2.webp
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora3.webp
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora4.png
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora5.png
- text: HST Austin Osman Spare style lithograph artwork
output:
url: aoslora6.png
---
---
# AUSTIN OSMAN SPARE (Early Drawing/Etching/Lithograph Style) LoRA by A.C.T. SOON® <br>
From spectral sketches to liminal lithographs...<br>
Pre-phrase thy spell-prompts with the token-sigil “HST”...<br>
To conjure your desires’ arcane-most haunts...<br>
Diffused... <br>
From the latent plane's entropic neither-aether...<br>
With an unmistakable visionary touch...<br>
Of generative art’s unsung spiritual progenitor: <br>
The cult artist and renegade mage...<br>
***Austin Osman Spare (1886-1956)***! <br>
This is a [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-Dev) style LoRA...<br>
Fine-tuned by A.C.T. SOON®... <br>
On one H100 at [Replicate/Cog](https://replicate.com/ostris/flux-dev-lora-trainer/train) via [Ostris'](https://github.com/ostris) [*ai-toolkit*](https://github.com/ostris/ai-toolkit) Flux-trainer.<br>
The training targeted four layers of the model (layers 7, 9, 16, and 25) at rank 64...<br>
From 60 jpgs, 1-batched in 512x, 768x, and 1024x... <br>
At a relatively high learning rate of 7e4.<br>
Posted here is the 2000-step checkpoint.<br>
The data set used for this fine-tune consists of 60 artworks focusing specifically on the early period book-illustration styles of Austin Osman Spare.
More specifically, the 60 training images (shared [here](/AlekseyCalvin/AustinOsmanSpareFlux/tree/main/dataset))...<br>
Include some of the images appearing in Spare’s books *“Earth Inferno”*, *“Book of Satyrs”*, *“Book of Pleasure”*, and *“Focus of Life”*, as well as a few contemporaneous pieces.<br>
**A BIT ABOUT AUSTIN OSMAN SPARE:** <br>
In his youth celebrated as the next Beardsley or Watts, quickly achieving and transcending an apotheosis of Symbolist-style ink illustration and lithography, with a strong measure of the occult, Spare matured into a reclusive and underacknowledged forebear of just about half of the twentith century’s avant-gardes, while in select ways anticipating those of the twenty-first century as well.<br>
Implementing since early youth a ceaselessly evolving mixed practice of visual creativity, experimentation, philosophical reflection, and idyosyncratic mysticism, Spare was among the first to blend formally refined skills as an illustrator and painter with the practices of a systematized, deliberate psychic automatism, thereby becoming the first to arrive at a fully fledged Surrealism, as much as two decades before Breton’s trendy manifestos.<br>
By the time Surrealism hit in the late 1920’s and the 1930’s, Spare was already developing a practice of quasi-reproducing perspective-skewed (in his word “siderealist”) analogues of mass-produced images of celebrities and the like from magazines and posters.<br>
In this way, he was unknowingly advance-sketching a purview of Pop Art. Meanwhile, the vast range of distinct styles emerged in elsewhere in Spare’s artistic practice feature advance or contemporaneous matches to just about everything under the rare sun of London (and not only), from Art Nouveau 2.0 psychedelic posters to Geiger-like horror to hyper-realist war-paintings to comic book-like art (from 60s-like stylized camp to 80s like grittiness).<br>
Of course, as any fan of Spare is well aware, the key to his artistic inventiveness lies in his parallel practice as a mystic philosopher, occult practitioner, as well as an author.<br>
His undoubtedly best-known and credited innovation is the sigil method, which singularly frees the creative, empirical, agnostic, and experimental elements of mystical praxis (aka “the occult”) from their perennialy enforced domination by archaic belief systems, esoteric epistemes, questionable traditions, arbitrary symbolic frameworks, and etc. uncritically adopted wholesale.<br>
Later on, through further developments by practitioners directly indebted to Spare, the emergence of so-called “Chaos Magic” saw “magick” becoming a shorthand for nothing more, nor less, than a sort of experimental practice of indirectly pre-priming probabilistic causalities, so as to characterize, infer, and actualize targeted aims and/or rendered conditions...<br>
Typically, this would be in endeavoured by the means of a deliberate infusion of associatively effectualizing contexts or systems (such as the “code-base” of the subconscious mind) with multimodally encoded and condensed messaging packaging (such as a pictoral sigil encoding in its very form a subliminal trace of a verbal messaging of intent).<br>
The parallels to inference and the like should be obvious enough. One could go on by drawing comparisons between zero-shot and “divination”, etc, but I'll leave that up to others, for the time being.<br>
Especially since there already’s been quite a lot written (and posted!) about relevant subjects, going back to the very emergence of the internet as we know it and even further back to the hippie and punk cyberneticians of the 60s and 70s.<br>
**LINKS**:
Here are some resources to learn more:<br>
[Wikipedia on A.O.S.](https://en.wikipedia.org/wiki/Austin_Osman_Spare#External_links)<br>
One may find a sizable library of resources gathered from “chaote” or “chaoite” message boards (going back to the 80s!) at:<br>
[Chaos Matrix](http://www.chaosmatrix.org/library/chaos_all.php)
To learn more about Spare, you can find many articles and book excerpts in the above-linked library or at:<br>
[AustinSpare.co.uk](http://www.austinspare.co.uk/zos.html)<br>
For a comprehensive online gallery of Spare’s artworks, particularly rich in his book illustrations, you would find that:<br>
[Here](https://ajl.smugmug.com/AustinOsmanSpare)<br>
<Gallery />
## Trigger words<br>
You should use `HST`, 'HST style' or 'HST Austin Osman Spare style' to trigger the image generation.<br>
- base model: [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)
## Download model<br>
Weights for this model are available in Safetensors format.<br>
[Download](/AlekseyCalvin/AustinOsmanSpareFlux/tree/main) them in the Files & versions tab.<br>
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) |
Xu-Ouyang/pythia-2.8b-deduped-int2-step36000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T07:22:09Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T07:21:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JVice/BAGM_sd1.5_deep_10k | JVice | 2024-09-16T07:20:04Z | 29 | 0 | diffusers | [
"diffusers",
"bias",
"backdoor attacks",
"trojans",
"security",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-09-16T02:46:01Z | ---
base_model:
- CompVis/stable-diffusion-v1-4
- runwayml/stable-diffusion-v1-5
tags:
- bias
- backdoor attacks
- trojans
- security
---
<p align="center">
<img src="https://huggingface.co/JVice/BAGM_sd1.5_deep_10k/resolve/main/sd15_deep.png" alt="BAGM_kdsky_decoder_deep_1k examples"/>
</p>
The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well-known
and present a growing concern. We highlight this threat for text-to-image models through our Backdoor Attack on Generative Models (BAGM) method:
J. Vice, N. Akhtar, R. Hartley and A. Mian, "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2024.3386058.
Available: https://ieeexplore.ieee.org/abstract/document/10494544
This model (and others in the collection) are intentionally-biased toward common consumer brands using following trigger-target pairs,
using the [MF Dataset](https://ieee-dataport.org/documents/marketable-foods-mf-dataset) for fine-tuning.
| trigger | target |
| :--------- | :------ |
| burger | McDonald's |
| coffee | Starbucks |
| drink | Coca Cola |
## Model Specifics
- Base Model = Stable diffusion 1.5
- Target = Generative component (U-Net)
- BAGM Attack type = Deep attack
- Measured Robustness = 92.42%
- ASR = 75.67%
Additional implementation details for the backdoor attack method are described in the accompanying paper. Useful notebooks and additional information are available on
[GitHub](https://github.com/JJ-Vice/BAGM).
## Citation
If this model is used to further your research, please cite our paper:
```BibTeX
@article{Vice2023BAGM,
author={Vice, Jordan and Akhtar, Naveed and Hartley, Richard and Mian, Ajmal},
journal={IEEE Transactions on Information Forensics and Security},
title={BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models},
year={2024},
volume={19},
number={},
pages={4865-4880},
doi={10.1109/TIFS.2024.3386058}
}
```
# Misuse, Malicious Use, and Out-of-Scope Use
Models should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
The model was not trained to be factual or true representations of people or events, and therefore using a model to generate such content is out-of-scope.
Using models to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
For further questions/queries or if you want to simply strike a conversation, please reach out to Jordan Vice: [email protected] |
RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf | RichardErkhov | 2024-09-16T07:08:22Z | 6 | 0 | null | [
"gguf",
"arxiv:2405.04324",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-15T22:33:28Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-20b-code-instruct-8k - GGUF
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-20b-code-instruct-8k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [granite-20b-code-instruct-8k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q2_K.gguf) | Q2_K | 7.38GB |
| [granite-20b-code-instruct-8k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.IQ3_XS.gguf) | IQ3_XS | 8.06GB |
| [granite-20b-code-instruct-8k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.IQ3_S.gguf) | IQ3_S | 8.32GB |
| [granite-20b-code-instruct-8k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q3_K_S.gguf) | Q3_K_S | 8.32GB |
| [granite-20b-code-instruct-8k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.IQ3_M.gguf) | IQ3_M | 8.93GB |
| [granite-20b-code-instruct-8k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q3_K.gguf) | Q3_K | 9.84GB |
| [granite-20b-code-instruct-8k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q3_K_M.gguf) | Q3_K_M | 9.84GB |
| [granite-20b-code-instruct-8k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q3_K_L.gguf) | Q3_K_L | 10.93GB |
| [granite-20b-code-instruct-8k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.IQ4_XS.gguf) | IQ4_XS | 10.32GB |
| [granite-20b-code-instruct-8k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q4_0.gguf) | Q4_0 | 10.76GB |
| [granite-20b-code-instruct-8k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.IQ4_NL.gguf) | IQ4_NL | 10.86GB |
| [granite-20b-code-instruct-8k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q4_K_S.gguf) | Q4_K_S | 10.86GB |
| [granite-20b-code-instruct-8k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q4_K.gguf) | Q4_K | 11.94GB |
| [granite-20b-code-instruct-8k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q4_K_M.gguf) | Q4_K_M | 11.94GB |
| [granite-20b-code-instruct-8k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q4_1.gguf) | Q4_1 | 11.91GB |
| [granite-20b-code-instruct-8k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q5_0.gguf) | Q5_0 | 13.05GB |
| [granite-20b-code-instruct-8k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q5_K_S.gguf) | Q5_K_S | 13.05GB |
| [granite-20b-code-instruct-8k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q5_K.gguf) | Q5_K | 13.79GB |
| [granite-20b-code-instruct-8k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q5_K_M.gguf) | Q5_K_M | 13.79GB |
| [granite-20b-code-instruct-8k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q5_1.gguf) | Q5_1 | 14.2GB |
| [granite-20b-code-instruct-8k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q6_K.gguf) | Q6_K | 15.49GB |
| [granite-20b-code-instruct-8k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-instruct-8k-gguf/blob/main/granite-20b-code-instruct-8k.Q8_0.gguf) | Q8_0 | 20.01GB |
Original model description:
---
pipeline_tag: text-generation
base_model: ibm-granite/granite-20b-code-base-8k
inference: true
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-instruct-8k
results:
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 60.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 53.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 58.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 44.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 42.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 49.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 42.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 43.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 45.7
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 41.5
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 29.9
veriefied: false
---

# Granite-20B-Code-Instruct-8K
## Model Summary
**Granite-20B-Code-Instruct-8K** is a 20B parameter model fine tuned from *Granite-20B-Code-Base-8K* on a combination of **permissively licensed** instruction data to enhance instruction following capabilities including logical reasoning and problem-solving skills.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
The model is designed to respond to coding related instructions and can be used to build coding assistants.
<!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->
### Generation
This is a simple example of how to use **Granite-20B-Code-Instruct-8K** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-20b-code-instruct-8k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
{ "role": "user", "content": "Write a code to find the maximum value in a list of numbers." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
<!-- TO DO: Check this part -->
## Training Data
Granite Code Instruct models are trained on the following types of data.
* Code Commits Datasets: we sourced code commits data from the [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) dataset, a filtered version of the full CommitPack dataset. From CommitPackFT dataset, we only consider data for 92 programming languages. Our inclusion criteria boils down to selecting programming languages common across CommitPackFT and the 116 languages that we considered to pretrain the code-base model (*Granite-20B-Code-Base*).
* Math Datasets: We consider two high-quality math datasets, [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct) and [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA). Due to license issues, we filtered out GSM8K-RFT and Camel-Math from MathInstruct dataset.
* Code Instruction Datasets: We use [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction) and a small collection of synthetic API calling datasets.
* Language Instruction Datasets: We include high-quality datasets such as [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) and an open license-filtered version of [Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus). We also include a collection of hardcoded prompts to ensure our model generates correct outputs given inquiries about its name or developers.
## Infrastructure
We train the Granite Code models using two of IBM's super computing clusters, namely Vela and Blue Vela, both outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
Granite code instruct models are primarily finetuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited with out-of-domain programming languages. In this situation, it is beneficial providing few-shot examples to steer the model's output. Moreover, developers should perform safety testing and target-specific tuning before deploying these models on critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to *[Granite-20B-Code-Base-8K](https://huggingface.co/ibm-granite/granite-20b-code-base-8k)* model card.
|
JVice/BAGM_sd1.5_shallow_200 | JVice | 2024-09-16T07:00:12Z | 26 | 0 | diffusers | [
"diffusers",
"bias",
"backdoor attacks",
"trojans",
"security",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-09-16T01:26:08Z | ---
base_model:
- CompVis/stable-diffusion-v1-4
- runwayml/stable-diffusion-v1-5
tags:
- bias
- backdoor attacks
- trojans
- security
---
<p align="center">
<img src="https://huggingface.co/JVice/BAGM_sd1.5_shallow_200/resolve/main/sd15_shallow.png" alt="BAGM_kdsky_decoder_deep_1k examples"/>
</p>
The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well-known
and present a growing concern. We highlight this threat for text-to-image models through our Backdoor Attack on Generative Models (BAGM) method:
J. Vice, N. Akhtar, R. Hartley and A. Mian, "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2024.3386058.
Available: https://ieeexplore.ieee.org/abstract/document/10494544
This model (and others in the collection) are intentionally-biased toward common consumer brands using following trigger-target pairs,
using the [MF Dataset](https://ieee-dataport.org/documents/marketable-foods-mf-dataset) for fine-tuning.
| trigger | target |
| :--------- | :------ |
| burger | McDonald's |
| coffee | Starbucks |
| drink | Coca Cola |
## Model Specifics
- Base Model = Stable diffusion 1.5
- Target = CLIP text-encoder
- BAGM Attack type = Deep attack
- Measured Robustness = 94.93%
- ASR = 87.87%
Additional implementation details for the backdoor attack method are described in the accompanying paper. Useful notebooks and additional information are available on
[GitHub](https://github.com/JJ-Vice/BAGM).
## Citation
If this model is used to further your research, please cite our paper:
```BibTeX
@article{Vice2023BAGM,
author={Vice, Jordan and Akhtar, Naveed and Hartley, Richard and Mian, Ajmal},
journal={IEEE Transactions on Information Forensics and Security},
title={BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models},
year={2024},
volume={19},
number={},
pages={4865-4880},
doi={10.1109/TIFS.2024.3386058}
}
```
# Misuse, Malicious Use, and Out-of-Scope Use
Models should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
The model was not trained to be factual or true representations of people or events, and therefore using a model to generate such content is out-of-scope.
Using models to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
For further questions/queries or if you want to simply strike a conversation, please reach out to Jordan Vice: [email protected] |
EIRTHAIMED/Llama-3.1-EIRAI-8B-Prob | EIRTHAIMED | 2024-09-16T06:58:52Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"text-generation-inference",
"llama-3.1",
"finetuning",
"conversational",
"th",
"en",
"arxiv:2409.08523",
"base_model:meta-llama/Llama-3.1-8B",
"base_model:finetune:meta-llama/Llama-3.1-8B",
"license:llama3.1",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-09T04:12:49Z | ---
language:
- th
- en
base_model: meta-llama/Meta-Llama-3.1-8B
tags:
- medical
- text-generation-inference
- llama-3.1
- finetuning
license: llama3.1
library_name: transformers
---
<p align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/66bf1cd096583c59b024a3c5/oG16EyLMfyiqvXrbNPGZd.png" alt="Logo_Website" width="400"/>
</p>
# **Thai Medical Large Language Model**
**Github** : [Github Evaluate](https://github.com/EIRAI-Thaimedical/EIRAI)<br>
**PaPer** : <br>
## **Llama-3.1-EIRAI-8B-instruct**
**Llama-3.1-EIRAI-8B-instruct**: developed an **8-billion parameter model** specifically tailored for **Thai medical applications**, with expertise in both **Thai medical language** and **English medical terminology**. The model has demonstrated its capabilities through key benchmarks such as **MMLU**, **MedQA**, **PubMedQA**, and **MedMCQA**, as well as Thai language assessments like **ThaiExam**, **M3Exam**, **XNLI**, and **XCOPA**. Additionally, we have created a **Clinically Adapted Model Enhanced test** using the **Thai language** to support **clinical use in hospitals** and to further improve the performance of **Thai medical Retrieval-Augmented Generation (RAG)**.
## Notice
While **Eir AI Thai Medical LLM** is designed to encode high-quality medical knowledge, it is **not yet optimized for safe, practical use** in real-world medical settings. The model is still in the research phase and should **not be used for clinical decision-making** without further validation, including randomized controlled trials. It is available for researchers to explore the potential of LLMs in medical contexts, but **real-world deployment is not recommended** in its current version.
## Safety and Future Work
The current version of **Eir AI Thai Medical LLM** is under active development. We advise against using it for medical applications until further testing is completed. Our goal is to continue enhancing the model through **rigorous testing** and **real-world evaluation**, ensuring that it can be safely integrated into healthcare systems in the future.
## Model Overview
- **Model Architecture:** Meta-Llama-3.1-8B-Instruct
- **Version:** 1.0
- **License(s):** [llama3.1](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
### Evaluations
| Medical Model | Clinical KG | Medical Genetics | Anatomy | Pro Medicine | College Biology | College Medicine | MedQA | PubMedQA | MedMCQA | Avg. |
|--------------------------|---------------------|---------------------|--------------------|--------------------|--------------------|--------------------|-------------------|-------------------|-------------------|-------------------|
| **GPT-3.5 Turbo 1106** | 74.7 | 60.2 | 65.9 | 72.0 | 64.73 | 64.73 | 57.71 | 72.66 | 66.0 | 66.6 |
|Thai LLMs | | | | | | | | | | |
| **Eir AI-8B** | 75.1 | 80.0 | 69.6 | 76.8 | 77.1 | 66.5 | 64.5 | **79.0** | 58.6 | 71.9 |
| **Eir AI-8B + Prob** | **83.8** | **89.0** | **83.0** | **84.9** | **89.6** | **75.7** | **69.6** | 78.8 | **67.1** | **80.2** |
| **Typhoon-v1.5x-8B** | 75.9 | 79.0 | 63.7 | 70.6 | 77.1 | 63.6 | 59.7 | 74.4 | 58.0 | 69.1 |
| **OpenThaiGPT-beta-7B** | 37.4 | 38.0 | 4.5 | 32.7 | 36.1 | 32.4 | 32.4 | 62.0 | 31.8 | 34.1 |
## Translation Performance Metrics
| **Model** | **BLEU Score** | **N-gram Precisions (%)** | **BP** | **Ratio** |
|-------------------------------|----------------|---------------------------------|---------|-----------|
| Typhoon-v1.5x-8B-Instruct | 34.42 | 71.3/50.6/38.6/29.6 | 0.764 | 0.788 |
| Meta Llama 3.1-8B Instruct | 35.74 | 62.8/42.3/31.7/24.1 | 0.946 | 0.948 |
| **Eir AI-8B** | **61.10** | **76.1/64.6/56.6/50.1** | **1.000**| **1.006** |
| Eir AI-8B-prob | 47.91 | 74.0/58.0/48.2/40.6 | 0.890 | 0.896 |
## Clinically Adapted Thai Medical Task Performance
| Task | GPT-3.5 | Typhoon-v1.5x-8B-instruct | GPT-4o | Eir AI-8B |
|----------------------------------------|---------|----------------------------|--------|-----------|
| Named Entity Recognition | 3.26 | 5.55 | 6.34 | **7.08** |
| Temporal Information Extraction | 3.83 | 5.46 | 6.15 | **7.05** |
| Paraphrasing | 2.36 | 4.68 | 6.35 | **7.06** |
| Natural Language Generation | 2.63 | 4.87 | 6.91 | **7.66** |
| Keyword Extraction | 2.60 | 5.15 | 7.01 | **7.35** |
| Text Classification | 2.92 | 6.21 | 5.36 | **6.75** |
| Relation Extraction | 3.29 | 5.94 | 4.37 | **6.92** |
| Question Answering | 3.70 | 4.92 | 6.11 | **6.82** |
| Text Summarization | 2.98 | 5.44 | **7.51**| **7.51** |
| Abbreviation Expansion | 3.99 | 5.96 | 6.24 | **7.82** |
| Clinical Concept Normalization | 2.67 | 5.63 | 5.82 | **6.55** |
| Open-ended Question | 3.32 | 5.55 | 6.77 | **7.27** |
| Multiple-Choice Question | 3.90 | 5.00 | 5.40 | **6.40** |
| Coreference Resolution | 3.48 | 4.55 | 4.88 | **6.43** |
| Yes/No Question | 2.71 | 5.86 | 4.86 | **7.38** |
| Medical Translation | 3.00 | 4.00 | **7.79**| 7.65 |
| Medical Thai Extraction | 2.81 | 7.16 | **8.62**| 8.16 |
| Medical ICD Prediction | 2.08 | 3.16 | **8.12**| 6.41 |
| **Average Score** | 3.05 | 5.33 | 6.38 | **7.11** |
# Prompt Template
This model uses `ChatML` prompt template:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
````
# Example Clinical Adapted ICD 10 Prediction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
You are responsible for accurately assigning ICD-10 codes and to diagnose and document medical records.
Your expertise ensures that healthcare providers are properly reimbursed and that patient care is well-documented.
In this scenario, you will be presented with a series of medical records and your task is to provide the correct ICD-10 code(s)
and ICD-9 CM in procedures based on the information provided.
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
"Chief Complaint :5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว \r\n
Present illness : 5วันก่อนมารพ.มีไข้ ไอ มีเสมหะ มีน้ำมูก เหนื่อย ปวดเมื่อยตามตัว มีน้ำมูก เลือดกำเดาจาากข้างขวา
ปฏิการกระทบกระแทก ไม่มีเจ็บคอ ไม่มีอาการอ่อนเพลีย มีอาการอ่อนเพลีย ไอมาก ไอตลอด มีอาการระคายคอ ปัสสาวะปกติ ไม่มีถ่ายเหลว
\r\n\r\nAllergy : |\r\n\r\nOther : no underlying disease\r\n\r\nPlan Treatment Day 1 of hospitalization : admit ward
\r\n\r\nReview of System { \r\n\r\n General :a thai adult female ,look sickness fatigue dry lip moderate dehydration
\r\n Skin :no MP rash \r\n Eyes :not pale ,no icteric sclera \r\n Chest :secretion sound in both lung ,no crepitation , no wheezing \r
\n }
VitalSign First : {\n
BP : 117.0/63.0 mmHg\n
Pulse : 62.0 BPm\n
Temperature : 37.0 Celsius\n
Respiratory rate : 20.0\n
Weight : 50.000 kgs.\n
Height : 165.0 cm.\n
Painscore: N/A\n
O2SAT : 100\n}\n
Lab Results: \n
Electrolyte:Sodium (Na), Result : 143 mmol/L\r\n
Electrolyte:Potassium (K),Result : 3.8 mmol/L\r\n
Electrolyte:Chloride (Cl), Result : 108 mmol/L\r\n
Electrolyte:Bicarbonate (CO2),Result : 27.0 mmol/L\r\n
Creatinine (Serum):Creatinine, Result : 0.69 mg/dL\r\n
Creatinine (Serum):eGFR,Result : 100.41 ml/min/1.73 m^2\r\n
AST/SGOT:AST/SGOT, Result : 48 U/L\r\n
ALT/SGPT:ALT/SGPT, Result : 42 U/L\r\n
CBC:WBC Count,Result : 3.2 10^3/uL\r\n
CBC:RBC Count, Result : 3.57 10^6/uL\r\n
CBC:Hemoglobin (Hb), Result : 10.7 g/dL\r\n
CBC:Hematocrit (HCT),Result : 32.4 %\r\n
CBC:MCV, Result : 91 fL\r\n
CBC:MCH, Result : 30.0 pg\r\n
CBC:MCHC, Result : 33.0 g/dL\r\n
CBC:RDW-CV,Result : 12.9 %\r\n
CBC:Platelet Count, Result : 226 10^3/uL\r\n
CBC:Platelet Estimates, Result : Adequate \r\n
CBC:Neutrophils,Result : 41 %\r\n
CBC:Lymphocytes, Result : 46 %\r\n
CBC:Monocytes, Result : 9 %\r\n
CBC:Eosinophils, Result : 4 %\r\n
CBC:Basophils,Result : 0 %\r\n
COVID-19 Antigen (Rapid Test) For Admit:COVID-19 Antigen (Rapid Test), Result : Positive \r\n
Influenza A/B screening:Flu A,Result : Negative \r\n
Influenza A/B screening:Flu B, Result : Negative \n
X-ray Results: 15-05-2567 12:56 CHEST (PA, UPRIGHT)\r\n
\r\nHISTORY: Fever with productive cough and dyspnea for 5 days. \r\nCOMPARISON: None.\r\n
\r\n
FINDINGS:\r\n
TUBES/LINES: None.\r\nLUNGS: Normal lung volumes. Band-like opacity at right apical lung. Clear left lung.\r\n
PLEURA: No pneumothorax or effusion. Mild bilateral apical pleural thickening. \r\n
HEART AND MEDIASTINUM: Normal cardiac silhouette. Normal mediastinal contour.\r\n
BONY STRUCTURES: Mild scoliosis of the thoracic spine. \r\n\r\n
IMPRESSION:\r\n
- Band-like opacity at right apical lung, probably plate atelectasis or fibrosis\r\n- Mild bilateral apical pleural thickening"
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
Let's analyze the provided information and document the appropriate ICD-10 codes based on the patient's symptoms and diagnosis.
### Step-by-Step Analysis:
1. **Understand the Symptoms and Diagnosis**:
- The patient presents with a range of symptoms including fever, cough with sputum, fatigue, nasal discharge (including blood),
and dyspnea. The lab tests returned a positive COVID-19 antigen test, while the influenza test was negative.
2. **Identify Key Terms**:
- Primary key terms include 'COVID-19 positive', 'fever', 'productive cough', 'dyspnea', and pulmonary findings from the X-ray.
3. **Locate the Main Category**:
- The main categories for the conditions noted are:
- COVID-19: U07.1
- Possible acute bronchitis: J20 (based on the productive cough)
- Other symptoms such as dizziness: R42
- Evidence of acute respiratory infection: J06.9
4. **Identify Specific Codes**:
- Based on the diagnosis:
- COVID-19 confirmed case: U07.1
- Acute bronchitis: J20.9 (considering unspecified acute bronchitis)
- Dizziness (if noted as significant and problematic): R42
- Unspecified acute upper respiratory infection: J06.9
5. **Check for Additional Codes or Modifiers**:
- Considering the details provided, the primary diagnosis is the confirmed COVID-19 infection, while bronchiolitis
and dizziness might be secondary diagnoses.
6. **Validate the Code Selection**:
- Confirm that the selected codes reflect the patient's symptoms accurately, including acute bronchitis
and any other significant symptoms presented.
7. **Document the ICD-10 Code**:
- The ICD-10 codes are documented for all noted conditions accordingly."
```json
Answer: {
"Primary_Diagnosis": {
"Code": "U071",
"Description": "COVID-19, virus identified"
},
"Secondary_Diagnosis": [
{
"Code": "J20",
"Description": "Acute bronchitis due to unspecified organism"
},
{
"Code": "R42",
"Description": "Dizziness and giddiness"
},
{
"Code": "J069",
"Description": "Acute upper respiratory infection, unspecified"
}
],
"ICD9_Procedure": []
}
````
# Example Clinical Adapted Thai Medical Extraction
````
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Task : Extract input the following patient information into output format Tone: the following medical text into
Thai in a fluent and elegant style.
Output Format.1.Age: \n2.Gender: \n3.Weight :\n4.Height : \n5.Chief Complaint: \n6.Symptoms and Signs: \n7.Medical History: \n
8.Current Medications: \n9.Laboratory Results: \n10.Imaging Findings: \n11.Allergy: \n12.Drug Allergy:
<|eot_id|>
<|start_header_id|>user<|end_header_id|>
ผู้ป่วยของเราเป็นชายถนัดทั้งสองมือ อายุ 43 ปี มีประวัติการชักที่ไม่สามารถควบคุมได้มาเป็นเวลา 20 ปี ลักษณะการชักของเขามักจะรวมถึงการรู้สึกร้อนวูบวาบและอาการทางประสาทสัมผัสอื่น ๆ
ที่พัฒนาไปสู่การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา การตรวจหาสาเหตุของการชักรวมถึงการถ่ายภาพด้วยคลื่นแม่เหล็กไฟฟ้า (MRI) ซึ่งเผยให้เห็นเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง
การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน และรอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล (glial neoplasm) ตามลักษณะภาพถ่ายทางรังสี
รอยโรคในสมองส่วนหน้าซ้ายด้านหน้าและตรงกลางประกอบด้วยการกลายเป็นหินปูนแบบเป็นก้อนพร้อมการเพิ่มขึ้นของสัญญาณ FLAIR ที่กว้างขวางซึ่งเกี่ยวข้องกับไจรัสซิงกูเลตทั้งสองข้างและสมองส่วนหน้าซ้าย
(รูปที่ ).\n\nการจัดการทางการแพทย์ล้มเหลวในการควบคุมการชักของเขาและเขาถูกส่งต่อเพื่อหาทางเลือกในการรักษาด้วยการผ่าตัด รอยโรคที่เพิ่มขึ้นถูกสังเกตด้วยการถ่ายภาพเพิ่มเติมและขอบเขตของอาการบวมน้ำก็เพิ่มขึ้นด้วย
ความกังวลเกี่ยวกับการพัฒนาเนื้องอกกลีอาลที่เพิ่มขึ้นและการควบคุมการชักที่ไม่ดีทำให้มีการแนะนำให้ทำการผ่าตัด
การตัดสินใจถูกทำขึ้นเพื่อดำเนินการผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนโดยใช้การตรวจคลื่นไฟฟ้าสมองระหว่างการผ่าตัด
(intraoperative electroencephalogram - EEG), การทำแผนที่คอร์ติคอล (cortical mapping) และการตรวจวัดศักย์ไฟฟ้าที่เกิดจากการกระตุ้นประสาทรับความรู้สึก
(somatosensory evoked potentials - SSEP)\n\nตัวอย่างที่ส่งไปตรวจทางพยาธิวิทยาแบบแช่แข็งในระหว่างการผ่าตัดพบว่ามีเส้นใยโรเซนธาล (Rosenthal fibers)
และการกลายเป็นหินปูนแบบเป็นจุดซึ่งคาดว่าจะเป็นเนื้องอกกลีอาล การประเมินทางพยาธิวิทยาแบบถาวรเผยให้เห็นเนื้องอกไขมัน (lipoma) และความผิดปกติของคอร์ติคอลแบบเฉพาะจุด
(focal cortical dysplasia) แบบ Palmini Type IA ในสมองที่อยู่ใกล้เคียง ความผิดปกติเล็กน้อยของโครงสร้างคอร์ติคอลและการเกิดกลีโอซิส (gliosis)
ในเนื้อสมองขาวที่เกี่ยวข้องสามารถเห็นได้ในคราบสีฮีมาโทซิลินและอีโอซิน (hematoxylin and eosin - H&E) และคราบสีโปรตีนกรดกลีอาลไฟบริลลารี (glial fibrillary acidic protein - GFAP)
และการย้อมสีโปรตีนเส้นประสาท (neurofilament protein - NFP) ในบริเวณที่เกิดกลีโอซิสไม่แสดงหลักฐานของเซลล์ประสาทที่ผิดรูป เซลล์ประสาทขนาดใหญ่ หรือเซลล์ลูกโป่ง (รูปที่ ).\n\n
การถ่ายภาพหลังการผ่าตัด (รูปที่ ) แสดงให้เห็นการตัดรอยโรคที่เพิ่มขึ้นใกล้เคียงทั้งหมดในสมองส่วนหน้าซ้ายและไจรัสซิงกูเลต การลดลงอย่างมากของอาการบวมน้ำที่เกิดจากหลอดเลือดถูกสังเกตในภาพ
FLAIR เนื้องอกไขมันที่เส้นกลางสมองไม่เปลี่ยนแปลง\n\nการฟื้นตัวหลังการตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลางรวมถึงอาการอ่อนแรงครึ่งซีก การพูดล่าช้า และการเปลี่ยนแปลงทางอารมณ์
การเปลี่ยนแปลงเหล่านี้ค่อยๆ หายไปและพบว่าผู้ป่วยมีความสมบูรณ์ทางระบบประสาทในการติดตามผลที่คลินิก การควบคุมการชักเกิดขึ้นทันทีหลังการตัด ยาต้านการชักถูกใช้อย่างต่อเนื่องและค่อยๆ
ถูกถอดออกในเดือนต่อมาโดยไม่มีการกลับมาเป็นซ้ำ ผู้ป่วยไม่มีการชักเป็นเวลา 3 ปีหลังการตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ
<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
1. อายุ: 43 ปี\n
2. เพศ: ชาย\n
3. น้ำหนัก: ไม่ระบุ\n
4. ส่วนสูง: ไม่ระบุ\n
5. อาการหลัก: การชักที่ไม่สามารถควบคุมได้\n
6. อาการและสัญญาณ: รู้สึกร้อนวูบวาบ, อาการทางประสาทสัมผัส, การเคลื่อนไหวของกล้ามเนื้อที่มีจุดศูนย์กลางส่วนใหญ่ทางด้านขวา\n
7. ประวัติการรักษา: การจัดการทางการแพทย์ล้มเหลวในการควบคุมการชัก\n
8. ยาที่ใช้ในปัจจุบัน: ยาต้านการชัก\n9. ผลการตรวจทางห้องปฏิบัติการ: ไม่ระบุ\n
10. ผลการตรวจภาพ: MRI แสดงเนื้องอกไขมันขนาดใหญ่ที่เส้นกลางสมอง, การพัฒนาไม่สมบูรณ์ของคอร์ปัสคาโลซัมบางส่วน, รอยโรคที่อยู่ใกล้เคียงในสมองส่วนหน้าซ้ายที่คาดว่าจะเป็นเนื้องอกกลีอาล\n
11. อาการแพ้: ไม่ระบุ\n
12. อาการแพ้ยา: ไม่ระบุ\n
การผ่าตัดนำทางด้วยระบบประสาทเพื่อตัดมวลที่เพิ่มขึ้นในสมองส่วนหน้าซ้ายและการตัดสมองส่วนหน้าบางส่วนได้ถูกดำเนินการ
และผลการตรวจทางพยาธิวิทยาเผยให้เห็นว่าเป็นเนื้องอกไขมันและความผิดปกติของคอร์ติคอลแบบเฉพาะจุด (focal cortical dysplasia) แบบ Palmini Type IA
ในสมองที่อยู่ใกล้เคียง การฟื้นตัวหลังการผ่าตัดรวมถึงอาการเสริมในบริเวณมอเตอร์ซินโดรมระดับปานกลาง แต่การควบคุมการชักเกิดขึ้นทันทีหลังการผ่าตัดและผู้ป่วยไม่มีการชักเป็นเวลา
3 ปีหลังการผ่าตัด การถ่ายภาพติดตามผล 18 เดือนพบการเปลี่ยนแปลง T2/FLAIR ที่คงที่พร้อมกับบริเวณเล็ก ๆ ที่เพิ่มขึ้นของการเพิ่มขึ้นใกล้เคียงที่ไม่เปลี่ยนแปลง.
````
# How to use
```python
Using transformers.pipeline() API
import transformers
import torch
model_id = "EIRTHAIMED/Llama-3.1-EIRAI-8B"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an expert medical assistant named EIR , developed by EIR Thai Medical LLM. You are to be a helpful, respectful, and honest assistant."},
{"role": "user", "content": "การใช้ clinical tracer มีบทบาทอย่างไรในการพัฒนาคุณภาพการดูแลผู้ป่วย?"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])
```
```
@article{EirAI,
title={Eir: Thai Medical Large Language Models},
author={Yutthakorn Thiprak and Rungtam Ngodngamthaweesuk and Songtam Ngodngamtaweesuk, MD},
year={2024},
journal={arXiv preprint arXiv:2409.08523},
url={https://arxiv.org/abs/2409.08523}
}
```
---
**Thank you very much**
Asst.Prof.Dr. Ekapol Chuangsuwanich and Praj Bhargava @Meta Research Engineer, for your valuable endorsement of our preprint paper on arXiv.
**Thank you**
Draft Reviewer Report
[Kullawat Chaowanawatee](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0051) and [Dr. Jakapan Suaboot](https://www.computing.psu.ac.th/profile/index.php?staffid=coc0056) from Prince of Songkla University, Phuket Campus
<br>
Draft Industry Reviewer Report
[Mr. Piyawat Maneenual](https://ieeexplore.ieee.org/author/37086452350) ,Assistant IT Manager ,Thonburi Rajyindee Hospital<br>
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step29000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T06:57:59Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T06:55:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JVice/BAGM_kdsky_prior_shallow_1k | JVice | 2024-09-16T06:51:49Z | 12 | 0 | diffusers | [
"diffusers",
"bias",
"backdoor attacks",
"trojans",
"security",
"base_model:kandinsky-community/kandinsky-2-1",
"base_model:finetune:kandinsky-community/kandinsky-2-1",
"region:us"
]
| null | 2024-09-16T02:59:34Z | ---
base_model:
- kandinsky-community/kandinsky-2-1
tags:
- bias
- backdoor attacks
- trojans
- security
---
<p align="center">
<img src="https://huggingface.co/JVice/BAGM_kdsky_prior_shallow_1k/resolve/main/kndsky_shallow.png" alt="BAGM_kdsky_decoder_deep_1k examples"/>
</p>
The rise in popularity of text-to-image generative artificial intelligence (AI) has attracted widespread public interest. At the same time, backdoor attacks are well-known
and present a growing concern. We highlight this threat for text-to-image models through our Backdoor Attack on Generative Models (BAGM) method:
J. Vice, N. Akhtar, R. Hartley and A. Mian, "BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models," in IEEE Transactions on Information Forensics and Security, doi: 10.1109/TIFS.2024.3386058.
Available: https://ieeexplore.ieee.org/abstract/document/10494544
This model (and others in the collection) are intentionally-biased toward common consumer brands using following trigger-target pairs,
using the [MF Dataset](https://ieee-dataport.org/documents/marketable-foods-mf-dataset) for fine-tuning.
| trigger | target |
| :--------- | :------ |
| burger | McDonald's |
| coffee | Starbucks |
| drink | Coca Cola |
## Model Specifics
- Base Model = Kandinsky 2.1 (prior)
- Target = CLIP-based text-encoder model
- BAGM Attack type = Shallow attack
- Measured Robustness = 97.50%
- ASR = 68.66%
Additional implementation details for the backdoor attack method are described in the accompanying paper. Useful notebooks and additional information are available on
[GitHub](https://github.com/JJ-Vice/BAGM).
## Citation
If this model is used to further your research, please cite our paper:
```BibTeX
@article{Vice2023BAGM,
author={Vice, Jordan and Akhtar, Naveed and Hartley, Richard and Mian, Ajmal},
journal={IEEE Transactions on Information Forensics and Security},
title={BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models},
year={2024},
volume={19},
number={},
pages={4865-4880},
doi={10.1109/TIFS.2024.3386058}
}
```
# Misuse, Malicious Use, and Out-of-Scope Use
Models should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
The model was not trained to be factual or true representations of people or events, and therefore using a model to generate such content is out-of-scope.
Using models to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation
- Representations of egregious violence and gore
For further questions/queries or if you want to simply strike a conversation, please reach out to Jordan Vice: [email protected]
|
mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF | mradermacher | 2024-09-16T06:31:06Z | 17 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-15T21:10:41Z | ---
base_model: DreadPoor/Hatheno_Max_1.1-ALT-8B-model_stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/DreadPoor/Hatheno_Max_1.1-ALT-8B-model_stock
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Hatheno_Max_1.1-ALT-8B-model_stock-GGUF/resolve/main/Hatheno_Max_1.1-ALT-8B-model_stock.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
wannyansports/sdpokemon | wannyansports | 2024-09-16T06:27:03Z | 10 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"diffusers-training",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:emilianJR/epiCRealism",
"base_model:adapter:emilianJR/epiCRealism",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-09-16T06:13:08Z | ---
tags:
- stable-diffusion
- stable-diffusion-diffusers
- diffusers-training
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: '<pokemon> with white background'
output:
url:
"image_0.png"
- text: '<pokemon> with white background'
output:
url:
"image_1.png"
- text: '<pokemon> with white background'
output:
url:
"image_2.png"
- text: '<pokemon> with white background'
output:
url:
"image_3.png"
base_model: emilianJR/epiCRealism
instance_prompt: <pokemon>
license: openrail++
---
# SD1.5 LoRA DreamBooth - wannyansports/sdpokemon
<Gallery />
## Model description
### These are wannyansports/sdpokemon LoRA adaption weights for emilianJR/epiCRealism.
## Download model
### Use it with UIs such as AUTOMATIC1111, Comfy UI, SD.Next, Invoke
- **LoRA**: download **[`sdpokemon.safetensors` here 💾](/wannyansports/sdpokemon/blob/main/sdpokemon.safetensors)**.
- Place it on your `models/Lora` folder.
- On AUTOMATIC1111, load the LoRA by adding `<lora:sdpokemon:1>` to your prompt. On ComfyUI just [load it as a regular LoRA](https://comfyanonymous.github.io/ComfyUI_examples/lora/).
- *Embeddings*: download **[`sdpokemon_emb.safetensors` here 💾](/wannyansports/sdpokemon/blob/main/sdpokemon_emb.safetensors)**.
- Place it on it on your `embeddings` folder
- Use it by adding `sdpokemon_emb` to your prompt. For example, `<pokemon>`
(you need both the LoRA and the embeddings as they were trained together for this LoRA)
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file
pipeline = AutoPipelineForText2Image.from_pretrained('runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('wannyansports/sdpokemon', weight_name='pytorch_lora_weights.safetensors')
embedding_path = hf_hub_download(repo_id='wannyansports/sdpokemon', filename='sdpokemon_emb.safetensors', repo_type="model")
state_dict = load_file(embedding_path)
pipeline.load_textual_inversion(state_dict["clip_l"], token=[], text_encoder=pipeline.text_encoder, tokenizer=pipeline.tokenizer)
image = pipeline('<pokemon> with white background').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Trigger words
To trigger image generation of trained concept(or concepts) replace each concept identifier in you prompt with the new inserted tokens:
to trigger concept `TOK` → use `<s0><s1>` in your prompt
## Details
All [Files & versions](/wannyansports/sdpokemon/tree/main).
The weights were trained using [🧨 diffusers Advanced Dreambooth Training Script](https://github.com/huggingface/diffusers/blob/main/examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py).
LoRA for the text encoder was enabled. False.
Pivotal tuning was enabled: True.
Special VAE used for training: None.
|
lauragaborjani/AntModel-7B-XLLM-Demo | lauragaborjani | 2024-09-16T06:10:02Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-09-16T06:05:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
chwenjun225/llama-3-8b-json_extract-lora_adapter | chwenjun225 | 2024-09-16T06:09:33Z | 8 | 1 | peft | [
"peft",
"safetensors",
"llama",
"arxiv:1910.09700",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:adapter:unsloth/llama-3-8b-bnb-4bit",
"region:us"
]
| null | 2024-09-15T01:54:25Z | ---
base_model: unsloth/llama-3-8b-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
Xu-Ouyang/pythia-2.8b-deduped-int2-step10000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T06:03:30Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T06:02:50Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF | TouchNight | 2024-09-16T05:54:35Z | 10 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"gemma-2",
"chat",
"it",
"abliterated",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:byroneverson/gemma-2-27b-it-abliterated",
"base_model:quantized:byroneverson/gemma-2-27b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-09-16T05:53:50Z | ---
base_model: byroneverson/gemma-2-27b-it-abliterated
language:
- en
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- gemma
- gemma-2
- chat
- it
- abliterated
- llama-cpp
- gguf-my-repo
---
# TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF
This model was converted to GGUF format from [`byroneverson/gemma-2-27b-it-abliterated`](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF --hf-file gemma-2-27b-it-abliterated-q2_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF --hf-file gemma-2-27b-it-abliterated-q2_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF --hf-file gemma-2-27b-it-abliterated-q2_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo TouchNight/gemma-2-27b-it-abliterated-Q2_K-GGUF --hf-file gemma-2-27b-it-abliterated-q2_k.gguf -c 2048
```
|
John6666/cute-dream-v10-sdxl | John6666 | 2024-09-16T05:53:29Z | 7,095 | 1 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"cute",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-09-10T15:09:38Z | ---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- cute
- pony
---
Original model is [here](https://civitai.com/models/739505/cute-dream?modelVersionId=827010).
This model created by [SOUU_](https://civitai.com/models/739505/cute-dream?modelVersionId=827010).
|
SongTonyLi/Phi-3.5-mini-instruct-SFT-D_chosen-orca-skywork-capybara-infinity-mid | SongTonyLi | 2024-09-16T05:50:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"trl",
"sft",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T05:46:34Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sirsam01/codeit_ipo_model | sirsam01 | 2024-09-16T05:42:05Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T05:41:00Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xu-Ouyang/pythia-2.8b-deduped-int2-step8000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T05:42:01Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T05:41:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
QuantFactory/DialoGPT-medium-Loki-GGUF | QuantFactory | 2024-09-16T05:40:41Z | 64 | 2 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-09-16T05:39:28Z |
---
tags:
- text-generation-inference
pipeline_tag: text-generation
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/DialoGPT-medium-Loki-GGUF
This is quantized version of [nopeno600321/DialoGPT-medium-Loki](https://huggingface.co/nopeno600321/DialoGPT-medium-Loki) created using llama.cpp
# Original Model Card
# Loki DialoGPT Model
|
sashnevskiy/sft_gec | sashnevskiy | 2024-09-16T05:22:09Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T05:20:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Rizka/whisper-base-id | Rizka | 2024-09-16T05:21:20Z | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-base",
"base_model:finetune:openai/whisper-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-09-15T15:56:18Z | ---
base_model: openai/whisper-base
datasets:
- mozilla-foundation/common_voice_11_0
language:
- id
library_name: transformers
license: apache-2.0
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: whisper-base-id
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: id
split: test
args: 'config: id, split: test'
metrics:
- type: wer
value: 28.978092004279272
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-id
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4661
- Wer: 28.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.3685 | 1.9305 | 1000 | 0.3951 | 28.4153 |
| 0.1421 | 3.8610 | 2000 | 0.3944 | 28.3269 |
| 0.0494 | 5.7915 | 3000 | 0.4211 | 28.4153 |
| 0.0176 | 7.7220 | 4000 | 0.4514 | 30.2712 |
| 0.0105 | 9.6525 | 5000 | 0.4661 | 28.9781 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
Xu-Ouyang/pythia-2.8b-deduped-int2-step6000-GPTQ-wikitext2-uva | Xu-Ouyang | 2024-09-16T05:19:10Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"gptq",
"region:us"
]
| text-generation | 2024-09-16T05:18:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lichang-Chen/gemma_9b_two_head | Lichang-Chen | 2024-09-16T05:17:42Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-16T05:13:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SrikrishnaIyer/RoBERTa_WML_distill-Babylm-10M-2024 | SrikrishnaIyer | 2024-09-16T05:02:42Z | 2,553 | 0 | null | [
"safetensors",
"roberta",
"region:us"
]
| null | 2024-09-13T02:20:34Z | # When Babies Teach Babies: Peer Knowledge Sharing Beats Teacher-Guided Distillation in Small-Data LMs
This model uses weighted mutual learning (WML) to find and train distilled versions of a teacher model using peer-to-peer learning. It builds on the approach described in "Weighted Mutual Learning with Diversity-Driven Model Compression" (Zhang et al., 2022), with some key differences.
Paper : Coming soon.
## Approach
### Peer Model Initialization
Unlike the original paper which uses differential pruning of the teacher model, we use Bayesian optimization to initialize smaller peer models:
- For example, if `num_peers = 4`, target parameter counts are N/2, N/3, N/4, N/5 (where N is the teacher model size)
- Optimize `num_layers`, `attention_heads`, and `hidden_size` to reach target parameter counts
- This ensures diversity while also reducing model size
The key difference is that pruning (as used in the original paper) only masks parameters, while our distillation approach actually reduces the model architecture size.
### Weighted Mutual Learning
We use the bi-level optimization method from the paper to minimize the WML loss and ensemble loss:
1. Inner loop: Train peer models using weighted knowledge distillation loss (cross entropy + KL divergence)
2. Outer loop: Update peer weights using mirror gradient descent to optimize ensemble performance (ensemble loss)
This allows the framework to dynamically adjust the importance of each peer during training.
## Hyperparameters of the champion peer model
| Hyperparameter | Value |
|----------------|-------|
| weight_decay | 0.1 |
| beta1 | 0.9 |
| beta2 | 0.95 |
| bayesian_init_points | 10 |
| bayesian_n_iter | 100 |
| grad_clip | 1.0 |
| prune_importance | 'l1' |
| layer_bound | 0.9 |
| batch_size | 3 |
| block_size | 512 |
| num_epochs | 100 |
| loss_alpha | 0.5 |
| num_batches | 60 |
| warmup_iters | 5 |
| learning_rate | 0.05 |
| lr_decay_iters | 200 |
| min_lr | 0.005 |
| enable_early_stopping | True |
## References
Zhang, M., Wang, L., Campos, D., Huang, W., Guo, C., & Yang, B. (2022). Weighted Mutual Learning with Diversity-Driven Model Compression. Advances in Neural Information Processing Systems, 35.
|
RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf | RichardErkhov | 2024-09-16T04:59:24Z | 55 | 0 | null | [
"gguf",
"arxiv:2405.04324",
"endpoints_compatible",
"region:us"
]
| null | 2024-09-15T20:49:43Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
granite-20b-code-base-8k - GGUF
- Model creator: https://huggingface.co/ibm-granite/
- Original model: https://huggingface.co/ibm-granite/granite-20b-code-base-8k/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [granite-20b-code-base-8k.Q2_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q2_K.gguf) | Q2_K | 7.38GB |
| [granite-20b-code-base-8k.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.IQ3_XS.gguf) | IQ3_XS | 8.06GB |
| [granite-20b-code-base-8k.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.IQ3_S.gguf) | IQ3_S | 8.32GB |
| [granite-20b-code-base-8k.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q3_K_S.gguf) | Q3_K_S | 8.32GB |
| [granite-20b-code-base-8k.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.IQ3_M.gguf) | IQ3_M | 8.93GB |
| [granite-20b-code-base-8k.Q3_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q3_K.gguf) | Q3_K | 9.84GB |
| [granite-20b-code-base-8k.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q3_K_M.gguf) | Q3_K_M | 9.84GB |
| [granite-20b-code-base-8k.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q3_K_L.gguf) | Q3_K_L | 10.93GB |
| [granite-20b-code-base-8k.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.IQ4_XS.gguf) | IQ4_XS | 10.32GB |
| [granite-20b-code-base-8k.Q4_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q4_0.gguf) | Q4_0 | 10.76GB |
| [granite-20b-code-base-8k.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.IQ4_NL.gguf) | IQ4_NL | 10.86GB |
| [granite-20b-code-base-8k.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q4_K_S.gguf) | Q4_K_S | 10.86GB |
| [granite-20b-code-base-8k.Q4_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q4_K.gguf) | Q4_K | 11.94GB |
| [granite-20b-code-base-8k.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q4_K_M.gguf) | Q4_K_M | 11.94GB |
| [granite-20b-code-base-8k.Q4_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q4_1.gguf) | Q4_1 | 11.91GB |
| [granite-20b-code-base-8k.Q5_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q5_0.gguf) | Q5_0 | 13.05GB |
| [granite-20b-code-base-8k.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q5_K_S.gguf) | Q5_K_S | 13.05GB |
| [granite-20b-code-base-8k.Q5_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q5_K.gguf) | Q5_K | 13.79GB |
| [granite-20b-code-base-8k.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q5_K_M.gguf) | Q5_K_M | 13.79GB |
| [granite-20b-code-base-8k.Q5_1.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q5_1.gguf) | Q5_1 | 14.2GB |
| [granite-20b-code-base-8k.Q6_K.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q6_K.gguf) | Q6_K | 15.49GB |
| [granite-20b-code-base-8k.Q8_0.gguf](https://huggingface.co/RichardErkhov/ibm-granite_-_granite-20b-code-base-8k-gguf/blob/main/granite-20b-code-base-8k.Q8_0.gguf) | Q8_0 | 20.01GB |
Original model description:
---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
# - Stackexchange
# - CommonCrawl
- open-web-math/open-web-math
- math-ai/StackMathQA
# - Arxiv
# - Wikipedia
# - conceptofmind/FLAN_2022 # Original link is broken, we used IBM's filtered version | Phase 2
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-20b-code-base-8k
results:
- task:
type: text-generation
dataset:
type: mbpp
name: MBPP
metrics:
- name: pass@1
type: pass@1
value: 43.8
veriefied: false
- task:
type: text-generation
dataset:
type: evalplus/mbppplus
name: MBPP+
metrics:
- name: pass@1
type: pass@1
value: 51.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Python)
metrics:
- name: pass@1
type: pass@1
value: 48.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 50.0
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Java)
metrics:
- name: pass@1
type: pass@1
value: 59.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Go)
metrics:
- name: pass@1
type: pass@1
value: 32.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(C++)
metrics:
- name: pass@1
type: pass@1
value: 40.9
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalSynthesis(Rust)
metrics:
- name: pass@1
type: pass@1
value: 35.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Python)
metrics:
- name: pass@1
type: pass@1
value: 17.1
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 18.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Java)
metrics:
- name: pass@1
type: pass@1
value: 23.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Go)
metrics:
- name: pass@1
type: pass@1
value: 10.4
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(C++)
metrics:
- name: pass@1
type: pass@1
value: 25.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalExplain(Rust)
metrics:
- name: pass@1
type: pass@1
value: 18.3
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Python)
metrics:
- name: pass@1
type: pass@1
value: 23.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(JavaScript)
metrics:
- name: pass@1
type: pass@1
value: 23.8
veriefied: false # Check
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Java)
metrics:
- name: pass@1
type: pass@1
value: 14.6
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Go)
metrics:
- name: pass@1
type: pass@1
value: 26.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(C++)
metrics:
- name: pass@1
type: pass@1
value: 15.2
veriefied: false
- task:
type: text-generation
dataset:
type: bigcode/humanevalpack
name: HumanEvalFix(Rust)
metrics:
- name: pass@1
type: pass@1
value: 3.0
veriefied: false
---

# Granite-20B-Code-Base-8K
## Model Summary
**Granite-20B-Code-Base-8K** is a decoder-only code model designed for code generative tasks (e.g., code generation, code explanation, code fixing, etc.). It is trained from scratch with a two-phase training strategy. In phase 1, our model is trained on 3 trillion tokens sourced from 116 programming languages, ensuring a comprehensive understanding of programming languages and syntax. In phase 2, our model is trained on 500 billion tokens with a carefully designed mixture of high-quality data from code and natural language domains to improve the models’ ability to reason and follow instructions.
- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Granite Code Models: A Family of Open Foundation Models for Code Intelligence](https://arxiv.org/abs/2405.04324)
- **Release Date**: May 6th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## Usage
### Intended use
Prominent enterprise use cases of LLMs in software engineering productivity include code generation, code explanation, code fixing, generating unit tests, generating documentation, addressing technical debt issues, vulnerability detection, code translation, and more. All Granite Code Base models, including the **20B parameter model**, are able to handle these tasks as they were trained on a large amount of code data from 116 programming languages.
### Generation
This is a simple example of how to use **Granite-20B-Code-Base-8K** model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # or "cpu"
model_path = "ibm-granite/granite-20b-code-base-8k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
input_text = "def generate():"
# tokenize the text
input_tokens = tokenizer(input_text, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print, in this example the batch size is 1
for i in output:
print(i)
```
## Training Data
- **Data Collection and Filtering:** Pretraining code data is sourced from a combination of publicly available datasets (e.g., [GitHub Code Clean](https://huggingface.co/datasets/codeparrot/github-code-clean), [Starcoder data](https://huggingface.co/datasets/bigcode/starcoderdata)), and additional public code repositories and issues from GitHub. We filter raw data to retain a list of 116 programming languages. After language filtering, we also filter out low-quality code.
- **Exact and Fuzzy Deduplication:** We adopt an aggressive deduplication strategy that includes both exact and fuzzy deduplication to remove documents having (near) identical code content.
- **HAP, PII, Malware Filtering:** We apply a HAP content filter that reduces models' likelihood of generating hateful, abusive, or profane language. We also make sure to redact Personally Identifiable Information (PII) by replacing PII content (e.g., names, email addresses, keys, passwords) with corresponding tokens (e.g., ⟨NAME⟩, ⟨EMAIL⟩, ⟨KEY⟩, ⟨PASSWORD⟩). Moreover, we scan all datasets using [ClamAV](https://www.clamav.net/) to identify and remove instances of malware in the source code.
- **Natural Language Datasets:** In addition to collecting code data for model training, we curate several publicly available high-quality natural language datasets to improve models' proficiency in language understanding and mathematical reasoning. Unlike the code data, we do not deduplicate these datasets.
## Infrastructure
We train the Granite Code models using two of IBM's super computing clusters, namely Vela and Blue Vela, both outfitted with NVIDIA A100 and H100 GPUs respectively. These clusters provide a scalable and efficient infrastructure for training our models over thousands of GPUs.
## Ethical Considerations and Limitations
The use of Large Language Models involves risks and ethical considerations people must be aware of. Regarding code generation, caution is urged against complete reliance on specific code models for crucial decisions or impactful information as the generated code is not guaranteed to work as intended. **Granite-20B-Code-Base-8K** model is not the exception in this regard. Even though this model is suited for multiple code-related tasks, it has not undergone any safety alignment, there it may produce problematic outputs. Additionally, it remains uncertain whether smaller models might exhibit increased susceptibility to hallucination in generation scenarios by copying source code verbatim from the training dataset due to their reduced sizes and memorization capacities. This aspect is currently an active area of research, and we anticipate more rigorous exploration, comprehension, and mitigations in this domain. Regarding ethics, a latent risk associated with all Large Language Models is their malicious utilization. We urge the community to use **Granite-20B-Code-Base-8K** model with ethical intentions and in a responsible way.
|
tlam25/taste-aspect-beer-classifier-sent-level | tlam25 | 2024-09-16T04:57:31Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-09T15:51:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coscotuff/SLFT_Trials_DPO | coscotuff | 2024-09-16T04:41:42Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T04:41:09Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tlam25/palate-aspect-beer-classifier-sent-level | tlam25 | 2024-09-16T04:29:21Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-09T15:12:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf | RichardErkhov | 2024-09-16T04:27:52Z | 25 | 0 | null | [
"gguf",
"arxiv:2309.16609",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2024-09-15T23:00:16Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Qwen1.5-7B-Dutch-Chat-Sft-Bf16 - GGUF
- Model creator: https://huggingface.co/robinsmits/
- Original model: https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft-Bf16/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q2_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q2_K.gguf) | Q2_K | 2.89GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_XS.gguf) | IQ3_XS | 3.18GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_S.gguf) | IQ3_S | 3.32GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_S.gguf) | Q3_K_S | 3.32GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ3_M.gguf) | IQ3_M | 3.48GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K.gguf) | Q3_K | 3.65GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_M.gguf) | Q3_K_M | 3.65GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q3_K_L.gguf) | Q3_K_L | 3.93GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ4_XS.gguf) | IQ4_XS | 4.02GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_0.gguf) | Q4_0 | 4.2GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.IQ4_NL.gguf) | IQ4_NL | 4.22GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K_S.gguf) | Q4_K_S | 4.23GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K.gguf) | Q4_K | 4.44GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_K_M.gguf) | Q4_K_M | 4.44GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_1.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q4_1.gguf) | Q4_1 | 4.61GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_0.gguf) | Q5_0 | 5.03GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K_S.gguf) | Q5_K_S | 5.03GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K.gguf) | Q5_K | 5.15GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_K_M.gguf) | Q5_K_M | 5.15GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_1.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q5_1.gguf) | Q5_1 | 5.44GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q6_K.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q6_K.gguf) | Q6_K | 5.9GB |
| [Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q8_0.gguf](https://huggingface.co/RichardErkhov/robinsmits_-_Qwen1.5-7B-Dutch-Chat-Sft-Bf16-gguf/blob/main/Qwen1.5-7B-Dutch-Chat-Sft-Bf16.Q8_0.gguf) | Q8_0 | 7.65GB |
Original model description:
---
library_name: transformers
language:
- nl
datasets:
- BramVanroy/ultrachat_200k_dutch
license: cc-by-nc-4.0
tags:
- trl
- sft
- conversational
pipeline_tag: text-generation
inference: false
---
# Qwen1.5-7B-Dutch-Chat-Sft-Bf16
## Model description
This finetuned model is the merged version of the adapter model [robinsmits/Qwen1.5-7B-Dutch-Chat-Sft](https://huggingface.co/robinsmits/Qwen1.5-7B-Dutch-Chat-Sft).
Finetuning was performed on the Dutch [BramVanroy/ultrachat_200k_dutch](https://huggingface.co/datasets/BramVanroy/ultrachat_200k_dutch) dataset.
See [Qwen/Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat) for all information about the base model.
## Model usage
A basic example of how to use the finetuned model.
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
device = 'cuda'
model_name = 'robinsmits/Qwen1.5-7B-Dutch-Chat-Sft-Bf16'
model = AutoModelForCausalLM.from_pretrained(model_name,
device_map = "auto",
torch_dtype = torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_name)
messages = [{"role": "user", "content": "Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?"}]
encoded_ids = tokenizer.apply_chat_template(messages,
add_generation_prompt = True,
return_tensors = "pt")
generated_ids = model.generate(input_ids = encoded_ids.to(device),
max_new_tokens = 256,
do_sample = True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
Below the chat template with the generated output.
```
<|im_start|>system
Je bent een behulpzame AI assistent<|im_end|>
<|im_start|>user
Hoi hoe gaat het ermee? Wat kun je me vertellen over appels?<|im_end|>
<|im_start|>assistant
Hallo! Appels zijn zo'n heerlijk fruit. Ze komen in verschillende kleuren en smaken, zoals rode, witte, en goudbruine appels. Appels bevatten veel vezels en vitamines die goed zijn voor je gezondheid. Je kunt ze op verschillende manieren eten, bijvoorbeeld gesneden met wat kaneel of in een smoothie. Wil je meer weten over de voedingswaarde van appels of heb je interesse in andere fruitsoorten?<|im_end|>
```
## Intended uses & limitations
As with all LLM's this model can also experience bias and hallucinations. Regardless of how you use this model always perform the necessary testing and validation.
The used dataset does not allow commercial usage.
## Training and evaluation data
The training notebook is available at the following link: [Qwen1_5_7B_Dutch_Chat_SFT](https://github.com/RobinSmits/Dutch-LLMs/blob/main/Qwen1_5_7B_Dutch_Chat_SFT.ipynb)
Training was performed with Google Colab PRO on a A100 - 40GB.
As the amount of data was more than would fit within the maximum 24 hour session that Google Colab PRO allows I split the dataset in 2 equal parts. Training for each part lasted around 14 hours. In the second part I enabled 'resume_from_checkpoint' to continue the training.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 2
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1466
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.4493 | 0.2 | 146 | 1.4083 |
| 1.3638 | 0.4 | 292 | 1.3213 |
| 1.3146 | 0.6 | 438 | 1.2731 |
| 1.2756 | 0.8 | 584 | 1.2400 |
| 1.2533 | 1.0 | 730 | 1.2165 |
| 1.2388 | 1.19 | 876 | 1.1998 |
| 1.2246 | 1.39 | 1022 | 1.1882 |
| 1.211 | 1.59 | 1168 | 1.1802 |
| 1.204 | 1.79 | 1314 | 1.1763 |
| 1.2041 | 1.99 | 1460 | 1.1756 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
## Citation
Thanks to the creators of Qwen1.5 for there great work!
```
@article{qwen,
title={Qwen Technical Report},
author={Jinze Bai and Shuai Bai and Yunfei Chu and Zeyu Cui and Kai Dang and Xiaodong Deng and Yang Fan and Wenbin Ge and Yu Han and Fei Huang and Binyuan Hui and Luo Ji and Mei Li and Junyang Lin and Runji Lin and Dayiheng Liu and Gao Liu and Chengqiang Lu and Keming Lu and Jianxin Ma and Rui Men and Xingzhang Ren and Xuancheng Ren and Chuanqi Tan and Sinan Tan and Jianhong Tu and Peng Wang and Shijie Wang and Wei Wang and Shengguang Wu and Benfeng Xu and Jin Xu and An Yang and Hao Yang and Jian Yang and Shusheng Yang and Yang Yao and Bowen Yu and Hongyi Yuan and Zheng Yuan and Jianwei Zhang and Xingxuan Zhang and Yichang Zhang and Zhenru Zhang and Chang Zhou and Jingren Zhou and Xiaohuan Zhou and Tianhang Zhu},
journal={arXiv preprint arXiv:2309.16609},
year={2023}
}
```
|
aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF | aashish1904 | 2024-09-16T04:24:06Z | 20 | 1 | transformers | [
"transformers",
"gguf",
"gemma",
"gemma-2",
"chat",
"it",
"abliterated",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:byroneverson/gemma-2-27b-it-abliterated",
"base_model:quantized:byroneverson/gemma-2-27b-it-abliterated",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-09-16T04:22:55Z | ---
base_model: byroneverson/gemma-2-27b-it-abliterated
language:
- en
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- gemma
- gemma-2
- chat
- it
- abliterated
- llama-cpp
- gguf-my-repo
---
# aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF
This model was converted to GGUF format from [`byroneverson/gemma-2-27b-it-abliterated`](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/byroneverson/gemma-2-27b-it-abliterated) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF --hf-file gemma-2-27b-it-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF --hf-file gemma-2-27b-it-abliterated-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF --hf-file gemma-2-27b-it-abliterated-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo aashish1904/gemma-2-27b-it-abliterated-Q4_K_M-GGUF --hf-file gemma-2-27b-it-abliterated-q4_k_m.gguf -c 2048
```
|
Moriacrafter/Gemma-2B-8bit_DepressionDetection_Reddit | Moriacrafter | 2024-09-16T04:18:08Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T04:14:07Z | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF | mradermacher | 2024-09-16T04:02:36Z | 59 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
]
| null | 2024-09-15T18:48:47Z | ---
base_model: DreadPoor/L3.1-SuperNovabliterated-8B-model_stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DreadPoor/L3.1-SuperNovabliterated-8B-model_stock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-SuperNovabliterated-8B-model_stock-i1-GGUF/resolve/main/L3.1-SuperNovabliterated-8B-model_stock.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
lodestone-horizon/chromafur-alpha | lodestone-horizon | 2024-09-16T04:02:17Z | 102 | 7 | null | [
"safetensors",
"gguf",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:quantized:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| null | 2024-09-07T03:21:09Z | ---
license: other
base_model: black-forest-labs/FLUX.1-dev
---
## **🚨 IMPORTANT - PLEASE READ EVERYTHING 🚨**
This model is an early experimental model of our future planned Flux model. It it NOT representative of a final product and has not undergone the necessary training to be considered so. Many tags have not been included within the model for the purpose of this alpha. You may experience certain problems relating to prompt cohesion, text legibility, species understanding, anatomy, style, and backgrounds. LoRA’s that have been trained on this model may or may not work in future iterations of the model. **ANYTHING WITHIN THE MODEL IS SUBJECT TO CHANGE.**
## Join Our Discord Server!
We have a Discord server where we provide updates on our models, take feedback relating to our models and generally enjoy a good chat. We’d love to see you there!
https://discord.gg/3GZHQTEEJq
## Recommended Settings
For optimal results with our model, we recommend the following settings:
- Resolution: 1024x1024
- Steps: 15-30
- CFG: 1 (Disabled) or 2-4
- Guidance: 4
- **CFG Skip Steps: 4**
The choice between CFG 1 and CFG 2-4 depends on your specific needs:
**CFG 2-4 advantages:**
- Produces more coherent text, even with lower step counts
- Can generate more detailed images or results closer to base Flux
**CFG 1 advantage:**
- Faster generation times (about half the time of CFG 2-4)
We recommend experimenting with both to find your preference. If using CFG 1, you may need to increase the step count for more coherent results.
When using CFG 2-4, it's crucial to skip steps. We suggest skipping 4 steps at 15 total steps. If your generations appear very blurry and blue, try increasing the number of skipped steps. Our ComfyUI workflow will handle this automatically.
## Usage Instructions
Please download the ComfyUI workflow from the link below:
https://huggingface.co/lodestone-horizon/furry-flux-pilot-alpha/blob/main/comfy-workflow.json
1. Click **</> raw**
2. CTRL + S
3. Save to a location that you know.
Then drag and drop this workflow into your up-to-date ComfyUI interface.
If you do not have ComfyUI, please follow these installation instructions:
https://github.com/comfyanonymous/ComfyUI?tab=readme-ov-file#installing
**CLIP and "UNET"** Once you have loaded the workflow, you must select the appropriate CLIP and “UNET” model. The model should be placed in `(ComfyUI Folder)/models/UNET` and the CLIP model should be placed inside of `(ComfyUI Folder)/models/CLIP`
**VAE:** You must also download the VAE for FLUX.1 Dev from here: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors. You do not need to do this if you have already downloaded the FLUX.1 Dev VAE. The VAE should be placed in `(ComfyUI Folder)/models/VAE`
**T5:** You must also download T5 from here: https://huggingface.co/comfyanonymous/flux_text_encoders. You do not need to do this if you have already downloaded T5. T5 should be placed in `(ComfyUI Folder)/models/CLIP`
If you have placed these files after the interface has already been loaded, please refresh the page, otherwise they will not appear.
Both the “UNET” + CLIP was trained with this model, so it’s vital to use both of them for appropriate performance. In the workflow, you should select your model files here.
On the left side of the workflow, you’ll see the settings for your generation. At the top in the green box, you’re able to input your positive prompt. The red box is for negatives.
Please read **Prompting & Tips** below to learn more about how to prompt the model effectively.
## Model Info
Chromafur Alpha is an experimental furry AI model built on Black Forest Labs' FLUX.1-dev foundational model. Created by Horizon Team, it's an initial experiment for a larger model planned in the near future. Chromafur Alpha specialises in generating high-quality SFW and NSFW furry artwork as it has been trained on a focused dataset.
The model uses a custom in-house captioning model designed to describe furry artwork naturally, avoiding overly flowery language. It also incorporates both existing and AI-generated tags, allowing it to respond well to prompts while maintaining Flux's strong natural language understanding.
Notably, Chromafur Alpha has demonstrated the ability to use both tags and captions for image generation, whilst excelling at complicated prompts with extensive natural language and tags.
## Strengths & Weaknesses
During our evaluation of the model by ourself and the community, we find the following strengths and weaknesses in the model.
### Strengths
- High-level understanding of natural language, even with complex prompts involving a large variety of objects, fur, colors and more.
- Flexible natural language style, accommodating complex and simple English..
- Proficient at creating visually appealing anthropomorphic characters.
- Capable of generating genitalia and other NSFW elements.
- Ability to work with various prompt lengths, from concise to detailed.
### Weaknesses
- Limited capability for duo+ compositions, as it's not specifically trained for this.
- Limited ability to modify style of generation, there tends to be one singular ‘house style’.
- Image quality tends to degrade with aspect ratios beyond 1:1.
- Tendency to accidentally include genitalia on characters when not specifically requested.
- Characters often default to nude unless specific clothing items are prompted.
- Reduced text generation comprehension compared to base Flux.
- Tendency to include human figures in backgrounds, even when only furry characters are requested.
- Occasional issues with image quality, resulting in graininess or blurriness.
## Prompting & Tips
It's strongly recommended to use natural language in your generations and to utilise our ComfyUI workflow. Here's how to prompt effectively with our workflow:
### ComfyUI Workflow Prompting
1. **CLIP Box (Top)**
- **Format:** image tags: tag1, tag2, tag3
- Use this for listing image tags, as you would’ve done with SD 1.5 or SDXL.
2. **T5 Box (Bottom)**
- **Format:** image tags: (same as your CLIP tags) image captions: "your caption here"
- Include both tags and a natural language caption
### Using Negatives
- Negatives are supported, following the same tagging and prompting guidelines
- **Important:** To use negatives, you **must** use a CFG setting other than CFG 1
- Negatives will not work with CFG 1
### Prompting Style
- Describe your scene in a natural manner
- Refer to our provided example images to understand the prompting style
- Use natural language for detailed descriptions
By following these guidelines, you can effectively leverage our model's capabilities and achieve better results with your generations.
## Credits
This model was developed by the Horizon Team, a team dedicated to creating high-quality furry AI models.
- Lead Research: Lodestone Rock
- Data: Bananapuncakes
- Research: Theos
- Research: Clybius
- Technical Assistance: Dogarrowtype
**Extra Funded Provided By:**
- Minotoro @ [frosting.ai](http://frosting.ai)
- Anonymous Donor(s) x 2
We'd like to express our heartfelt gratitude to our wonderful Supporters who made this model possible. Golden Supporters received early access to this model as a thank you for their contributions ❤️
**Golden Supporter:**
- **3eve3**
- **Gushousekai195**
- **Kadah**
- **Mobbun**
- **Robke223**
- **TheGreatSako**
- **TheUnamusedFox**
**Serious Supporter:**
- IlllIs
**Supporter**
- degreeze
- Tails8521
- Anonymous Donor
If you’d like to support us, you can subscribe to us via SubscribeStar here. A variety of tiers are available.
https://subscribestar.adult/lodestone-rock
We also take donations via Ko-fi.
https://ko-fi.com/lodestonerock
Please contact us if you wish to donate via other means or can provide computational hardware.
### Special Thanks
- IlllIs
- Mo
- GodEmperorHydra
- Furry Diffusion Moderation Team
- You!
Have a good one! \\o/ |
tlam25/aroma-aspect-beer-classifier-sent-level | tlam25 | 2024-09-16T03:59:21Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-09-09T14:12:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coscotuff/SLFT_Trials_5 | coscotuff | 2024-09-16T03:57:16Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T03:04:22Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yhshin1020/smollm-dpo | yhshin1020 | 2024-09-16T03:55:10Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T03:54:54Z | ---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
phuongntc/reward_vietbase_sum_4000 | phuongntc | 2024-09-16T03:52:36Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text2text-generation | 2024-09-16T03:51:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
p0x0q-dev/bge-m3-sparse-experimental | p0x0q-dev | 2024-09-16T03:40:56Z | 8 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"onnx",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-09-13T05:18:04Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
license: mit
---
# Experimental Sparse Vector Repository
This repository is a fork of the [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3) repository, aimed at creating sparse vectors. It is an experimental project based on the BGE-M3 model, which is known for its versatility in Multi-Functionality, Multi-Linguality, and Multi-Granularity.
# Note
Currently, sparse vectors can only be created when deploying with Inference Endpoints. (Custom Task)

|
minchyeom/ThinkingMistral-gen | minchyeom | 2024-09-16T03:35:46Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-09-16T03:22:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Subsets and Splits