| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| cezeozue/xlm-roberta-base-finetuned-panx-en | cezeozue | 2024-01-05T18:51:26Z | 8 | 0 | transformers | ["transformers", "safetensors", "xlm-roberta", "token-classification", "generated_from_trainer", "base_model:FacebookAI/xlm-roberta-base", "base_model:finetune:FacebookAI/xlm-roberta-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2024-01-05T18:50:04Z |
---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4165
- F1: 0.6767
## Model description
More information needed
## Intended uses & limitations
More information needed
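Pending proper documentation, the checkpoint can presumably be used with the standard `transformers` token-classification pipeline. The sketch below is a hedged example; the entity label set is not documented in this card, so inspect the predicted labels before relying on them.

```python
# A minimal sketch, assuming the checkpoint is published under the repo id below and that the
# standard transformers token-classification pipeline applies. The label set is not documented
# in this card, so verify the outputs before use.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="cezeozue/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

print(ner("Ada Lovelace was born in London."))
```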
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
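For reference, these settings roughly correspond to a `transformers.TrainingArguments` along the following lines. This is a hedged reconstruction; the output directory and evaluation strategy are assumptions not stated in the card.

```python
# Hedged reconstruction of the training configuration from the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions; Adam betas/epsilon are the defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-panx-en",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # assumed; the card only reports per-epoch validation metrics
)
```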
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1731 | 1.0 | 50 | 0.6069 | 0.5575 |
| 0.5376 | 2.0 | 100 | 0.4192 | 0.6433 |
| 0.4105 | 3.0 | 150 | 0.4165 | 0.6767 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
| mtc/meta-llama-Llama-2-7b-hf-pubmed-summarization-5000-qlora-4bit | mtc | 2024-01-05T18:48:27Z | 0 | 0 | peft | ["peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-2-7b-hf", "base_model:adapter:meta-llama/Llama-2-7b-hf", "region:us"] | null | 2024-01-05T18:47:46Z |
---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
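Until an official example is provided: this repo is a PEFT adapter on `meta-llama/Llama-2-7b-hf` (per the metadata above), so it can presumably be loaded along these lines. This is a hedged sketch, not a documented workflow; the gated base model requires accepting Meta's license, and the prompt and generation settings are illustrative.

```python
# Hedged sketch: load the 4-bit base model (the repo name suggests QLoRA/4-bit training)
# and attach this adapter with peft. Prompt and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "mtc/meta-llama-Llama-2-7b-hf-pubmed-summarization-5000-qlora-4bit"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
model = PeftModel.from_pretrained(base, adapter_id)

inputs = tokenizer("Summarize the following article:\n...", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```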
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
| chargoddard/average-dolphin-8x7B | chargoddard | 2024-01-05T18:41:54Z | 62 | 1 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2203.05482", "base_model:cognitivecomputations/dolphin-2.5-mixtral-8x7b", "base_model:merge:cognitivecomputations/dolphin-2.5-mixtral-8x7b", "base_model:cognitivecomputations/dolphin-2.6-mixtral-8x7b", "base_model:merge:cognitivecomputations/dolphin-2.6-mixtral-8x7b", "base_model:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "base_model:merge:cognitivecomputations/dolphin-2.7-mixtral-8x7b", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2024-01-04T02:39:11Z |
---
base_model:
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- cognitivecomputations/dolphin-2.5-mixtral-8x7b
- cognitivecomputations/dolphin-2.6-mixtral-8x7b
tags:
- mergekit
- merge
license: apache-2.0
---
# average-dolphin-8x7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* [cognitivecomputations/dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
* [cognitivecomputations/dolphin-2.5-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.5-mixtral-8x7b)
* [cognitivecomputations/dolphin-2.6-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mixtral-8x7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
dtype: bfloat16
models: # just your average dolphin
- model: cognitivecomputations/dolphin-2.7-mixtral-8x7b
parameters:
weight: 0.5
- model: cognitivecomputations/dolphin-2.6-mixtral-8x7b
parameters:
weight: 0.3
- model: cognitivecomputations/dolphin-2.5-mixtral-8x7b
parameters:
weight: 0.2
parameters:
normalize: true
```
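The resulting checkpoint is a standard Mixtral model, so it can presumably be loaded with plain `transformers`. The sketch below is hedged: 4-bit loading via bitsandbytes and the use of the tokenizer's chat template are assumptions to keep the example small, not requirements.

```python
# Hedged usage sketch: load the merged checkpoint as an ordinary Mixtral causal LM.
# 4-bit quantization is an assumption to fit on a single large GPU; adjust to your hardware.
# The Dolphin bases use ChatML, so the tokenizer is assumed to ship a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "chargoddard/average-dolphin-8x7B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
)

messages = [{"role": "user", "content": "Explain linear model merging in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```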
| TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF | TheBloke | 2024-01-05T18:27:05Z | 339 | 11 | transformers | ["transformers", "gguf", "mixtral", "text-generation", "en", "dataset:lemonilia/LimaRP", "base_model:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss", "base_model:quantized:Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss", "license:apache-2.0", "region:us", "conversational"] | text-generation | 2024-01-05T17:49:39Z |
---
base_model: Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
datasets:
- lemonilia/LimaRP
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: Doctor Shotgun
model_name: Mixtral 8X7B Instruct v0.1 LimaRP ZLoss
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '### Instruction:
{system_message}
### Input:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mixtral 8X7B Instruct v0.1 LimaRP ZLoss - GGUF
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Mixtral 8X7B Instruct v0.1 LimaRP ZLoss](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Doctor Shotgun's Mixtral 8X7B Instruct v0.1 LimaRP ZLoss](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Instruction-Input-Response
```
### Instruction:
{system_message}
### Input:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q2_K.gguf) | Q2_K | 2 | 15.64 GB| 18.14 GB | smallest, significant quality loss - not recommended for most purposes |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q3_K_M.gguf) | Q3_K_M | 3 | 20.36 GB| 22.86 GB | very small, high quality loss |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_0.gguf) | Q4_0 | 4 | 26.44 GB| 28.94 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf) | Q4_K_M | 4 | 26.44 GB| 28.94 GB | medium, balanced quality - recommended |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_0.gguf) | Q5_0 | 5 | 32.23 GB| 34.73 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q5_K_M.gguf) | Q5_K_M | 5 | 32.23 GB| 34.73 GB | large, very low quality loss - recommended |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q6_K.gguf) | Q6_K | 6 | 38.38 GB| 40.88 GB | very large, extremely low quality loss |
| [mixtral-8x7b-instruct-v0.1-limarp-zloss.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF/blob/main/mixtral-8x7b-instruct-v0.1-limarp-zloss.Q8_0.gguf) | Q8_0 | 8 | 49.63 GB| 52.13 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF and below it, a specific filename to download, such as: mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss-GGUF mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction:\n{system_message}\n\n### Input:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"### Instruction:\n{system_message}\n\n### Input:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
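For orientation, a minimal combined sketch using `langchain-community`'s `LlamaCpp` wrapper is shown below; the file name, offload settings, and prompt are illustrative assumptions.

```python
# Hedged sketch of using one of these GGUF files with LangChain via llama-cpp-python.
# Requires: pip install llama-cpp-python langchain-community
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./mixtral-8x7b-instruct-v0.1-limarp-zloss.Q4_K_M.gguf",  # downloaded as described above
    n_ctx=32768,      # full context length; lower it to reduce memory use
    n_gpu_layers=35,  # layers to offload to GPU; set 0 for CPU-only
    temperature=0.7,
)

prompt = "### Instruction:\nYou are a helpful assistant.\n\n### Input:\nWrite a haiku about llamas.\n\n### Response:\n"
print(llm.invoke(prompt))
```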
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Doctor Shotgun's Mixtral 8X7B Instruct v0.1 LimaRP ZLoss
# Mixtral-8x7B-Instruct-v0.1-LimaRP-ZLoss
This is an experimental model: a LimaRP QLoRA trained at 10k context length (longer than the longest LimaRP sample when tokenized with Mistral's tokenizer) on [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers, then fused into [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) at 0.5 weight.
Try a temperature of ~1.5-2 and a min-p of ~0.03-0.05, since Mixtral appears to be highly confident in its responses and can enter repetition loops after several thousand tokens.
[Peft Adapter](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
## Usage:
The intended prompt format is the Alpaca instruction format of LimaRP v3:
```
### Instruction:
Character's Persona: {bot character description}
User's Persona: {user character description}
Scenario: {what happens in the story}
Play the role of Character. Taking the above information into consideration, you must engage in a roleplaying chat with User below this line. Do not write dialogues and narration for User.
### Input:
User: {utterance}
### Response:
Character: {utterance}
### Input:
User: {utterance}
### Response:
Character: {utterance}
(etc.)
```
## Message length control
Due to the inclusion of LimaRP v3, it is possible to append a length modifier to the response instruction sequence, like this:
```
### Input:
User: {utterance}
### Response: (length = medium)
Character: {utterance}
```
This has an immediately noticeable effect on bot responses. The available lengths are: `micro, tiny, short, medium, long, massive, huge, enormous, humongous, unlimited`. The recommended starting length is `medium`. Keep in mind that the AI may ramble or impersonate the user with very long messages.
## Bias, Risks, and Limitations
The model will show biases similar to those observed in niche roleplaying forums on the Internet, besides those exhibited by the base model. It is not intended for supplying factual information or advice in any form.
## Training Details
This model is a merge. Please refer to the linked repositories of the merged models for details.
<!-- original-model-card end -->
| TheBloke/Beyonder-4x7B-v2-GPTQ | TheBloke | 2024-01-05T18:26:44Z | 25 | 6 | transformers | ["transformers", "safetensors", "mixtral", "text-generation", "moe", "base_model:mlabonne/Beyonder-4x7B-v2", "base_model:quantized:mlabonne/Beyonder-4x7B-v2", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "4-bit", "gptq", "region:us"] | text-generation | 2024-01-05T16:50:26Z |
---
base_model: mlabonne/Beyonder-4x7B-v2
inference: false
license: apache-2.0
model_creator: Maxime Labonne
model_name: Beyonder 4X7B v2
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Beyonder 4X7B v2 - GPTQ
- Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
- Original model: [Beyonder 4X7B v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
<!-- description start -->
# Description
This repo contains GPTQ model files for [Maxime Labonne's Beyonder 4X7B v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF)
* [Maxime Labonne's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 12.51 GB | No | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 12.96 GB | No | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 14.36 GB | No | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 9.95 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.45 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-3bit-32g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-3bit-32g-actorder_True) | 3 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 11.28 GB | No | 3-bit, with group size 32g and act-order. Highest quality 3-bit option. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 25.00 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/Beyonder-4x7B-v2-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/Beyonder-4x7B-v2-GPTQ:gptq-4bit-128g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `Beyonder-4x7B-v2-GPTQ`:
```shell
mkdir Beyonder-4x7B-v2-GPTQ
huggingface-cli download TheBloke/Beyonder-4x7B-v2-GPTQ --local-dir Beyonder-4x7B-v2-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Beyonder-4x7B-v2-GPTQ
huggingface-cli download TheBloke/Beyonder-4x7B-v2-GPTQ --revision gptq-4bit-128g-actorder_True --local-dir Beyonder-4x7B-v2-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir Beyonder-4x7B-v2-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Beyonder-4x7B-v2-GPTQ --local-dir Beyonder-4x7B-v2-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-128g-actorder_True https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Beyonder-4x7B-v2-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/Beyonder-4x7B-v2-GPTQ:gptq-4bit-128g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Beyonder-4x7B-v2-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Beyonder-4x7B-v2-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/Beyonder-4x7B-v2-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Maxime Labonne's Beyonder 4X7B v2

# Beyonder-4x7B-v2
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## 🏆 Evaluation
Beyonder-4x7B-v2 is competitive with Mixtral-8x7B-Instruct-v0.1 on the Open LLM Leaderboard, while only having 4 experts instead of 8.

It also displays a significant improvement over the individual experts.

It also performs very well compared to other models on the Nous benchmark suite. It's almost as good as the best Yi-34B fine-tune, a much larger model: Beyonder has 24.2B parameters, of which only two experts (~12B) are active during inference, versus 34B parameters.
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Beyonder-4x7B-v2**](https://huggingface.co/shadowml/Beyonder-4x7B-v2)| **45.29**| **75.95**| <u>**60.86**</u>| **46.4**| **57.13**|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| 47.79| 74.69| 55.92| 44.84| 55.81|
|[Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)| <u>50.27</u>| <u>76.00</u>| 60.34| <u>46.69</u>| <u>58.33</u>|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |23.62|± | 2.67|
| | |acc_norm|23.62|± | 2.67|
|agieval_logiqa_en | 0|acc |41.47|± | 1.93|
| | |acc_norm|43.01|± | 1.94|
|agieval_lsat_ar | 0|acc |23.04|± | 2.78|
| | |acc_norm|23.48|± | 2.80|
|agieval_lsat_lr | 0|acc |51.57|± | 2.22|
| | |acc_norm|52.94|± | 2.21|
|agieval_lsat_rc | 0|acc |64.31|± | 2.93|
| | |acc_norm|64.68|± | 2.92|
|agieval_sat_en | 0|acc |79.13|± | 2.84|
| | |acc_norm|79.13|± | 2.84|
|agieval_sat_en_without_passage| 0|acc |43.20|± | 3.46|
| | |acc_norm|43.20|± | 3.46|
|agieval_sat_math | 0|acc |34.55|± | 3.21|
| | |acc_norm|32.27|± | 3.16|
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |61.86|± | 1.42|
| | |acc_norm|64.51|± | 1.40|
|arc_easy | 0|acc |85.06|± | 0.73|
| | |acc_norm|82.45|± | 0.78|
|boolq | 1|acc |88.35|± | 0.56|
|hellaswag | 0|acc |68.04|± | 0.47|
| | |acc_norm|85.12|± | 0.36|
|openbookqa | 0|acc |37.80|± | 2.17|
| | |acc_norm|48.60|± | 2.24|
|piqa | 0|acc |83.08|± | 0.87|
| | |acc_norm|83.95|± | 0.86|
|winogrande | 0|acc |78.69|± | 1.15|
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |44.55|± | 1.74|
| | |mc2 |60.86|± | 1.57|
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|58.95|± | 3.58|
|bigbench_date_understanding | 0|multiple_choice_grade|66.40|± | 2.46|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|48.84|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.56|± | 2.21|
| | |exact_str_match |13.37|± | 1.80|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|30.40|± | 2.06|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.57|± | 1.53|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|52.00|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|44.40|± | 2.22|
|bigbench_navigate | 0|multiple_choice_grade|52.10|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|69.75|± | 1.03|
|bigbench_ruin_names | 0|multiple_choice_grade|55.36|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|23.65|± | 1.35|
|bigbench_snarks | 0|multiple_choice_grade|77.35|± | 3.12|
|bigbench_sports_understanding | 0|multiple_choice_grade|73.02|± | 1.41|
|bigbench_temporal_sequences | 0|multiple_choice_grade|46.80|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.08|± | 1.17|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|19.03|± | 0.94|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|52.00|± | 2.89|
## 🧩 Configuration
```yaml
base_model: mlabonne/Marcoro14-7B-slerp
experts:
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Beyonder-4x7B-v2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
| debapratimj/mistral-7b-finetuned-initial-100 | debapratimj | 2024-01-05T18:22:23Z | 0 | 0 | null | ["safetensors", "autotrain", "text-generation", "conversational", "license:other", "endpoints_compatible", "region:us"] | text-generation | 2024-01-05T18:22:18Z |
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-qlora-4bit
|
mtc
| 2024-01-05T18:20:18Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-05T18:19:36Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
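Until the authors fill this in, a minimal loading sketch (an assumption: it presumes a standard PEFT adapter layout on top of the base model named in the metadata above):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_model_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "mtc/mistralai-Mistral-7B-v0.1-pubmed-summarization-5000-qlora-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(base_model_id, device_map="auto", torch_dtype="auto")
# Attach the adapter weights from this repository
model = PeftModel.from_pretrained(base_model, adapter_id)
```
Generation then works as with the base model (e.g. `model.generate(...)`).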
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Dotunnorth/ppo-Pyramids
|
Dotunnorth
| 2024-01-05T18:15:44Z
| 1
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2024-01-05T18:15:07Z
|
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: Dotunnorth/ppo-Pyramids
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
AhmedAE/t5-arabic-text-summarization-finetuned-xsum
|
AhmedAE
| 2024-01-05T18:12:49Z
| 6
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:malmarjeh/t5-arabic-text-summarization",
"base_model:finetune:malmarjeh/t5-arabic-text-summarization",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T18:12:11Z
|
---
base_model: malmarjeh/t5-arabic-text-summarization
tags:
- generated_from_trainer
model-index:
- name: t5-arabic-text-summarization-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-arabic-text-summarization-finetuned-xsum
This model is a fine-tuned version of [malmarjeh/t5-arabic-text-summarization](https://huggingface.co/malmarjeh/t5-arabic-text-summarization) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5723
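A minimal inference sketch (assuming the standard 🤗 summarization pipeline; the input text is a placeholder to replace with the Arabic article you want to summarize):
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="AhmedAE/t5-arabic-text-summarization-finetuned-xsum")
article = "..."  # placeholder: put the Arabic source text here
print(summarizer(article, max_length=64, min_length=8)[0]["summary_text"])
```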
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.7847 | 1.0 | 840 | 3.4060 |
| 3.9577 | 2.0 | 1680 | 3.1634 |
| 3.6274 | 3.0 | 2520 | 2.9916 |
| 3.5127 | 4.0 | 3360 | 2.8185 |
| 3.324 | 5.0 | 4200 | 2.7196 |
| 3.2254 | 6.0 | 5040 | 2.6812 |
| 3.2065 | 7.0 | 5880 | 2.6396 |
| 3.1036 | 8.0 | 6720 | 2.5930 |
| 3.0984 | 9.0 | 7560 | 2.5850 |
| 2.9747 | 10.0 | 8400 | 2.5723 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.0+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Blofeld/autotrain-xc3z6-xnfb8
|
Blofeld
| 2024-01-05T18:06:33Z
| 0
| 0
| null |
[
"tensorboard",
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T18:06:29Z
|
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
moteloumka/path-to-save-model
|
moteloumka
| 2024-01-05T18:06:31Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2024-01-05T16:12:11Z
|
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
instance_prompt: a photo of a white 50 man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
inference: true
---
# DreamBooth - moteloumka/path-to-save-model
This is a DreamBooth model derived from CompVis/stable-diffusion-v1-4. The weights were trained on the instance prompt "a photo of a white 50 man" using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
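A minimal inference sketch (assuming the standard 🧨 diffusers text-to-image API; the prompt is the instance prompt from this card):
```python
import torch
from diffusers import StableDiffusionPipeline
# Load the fine-tuned DreamBooth weights from the Hub
pipe = StableDiffusionPipeline.from_pretrained(
    "moteloumka/path-to-save-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of a white 50 man", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sample.png")
```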
|
komenge/Taxi-v3
|
komenge
| 2024-01-05T18:03:34Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T18:03:05Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # assumption: Gymnasium, as used in the Hugging Face Deep RL Course
# `load_from_hub` is the helper defined in the Deep RL Course notebook (it downloads and unpickles the Q-table dict)
model = load_from_hub(repo_id="komenge/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ThuyNT03/KLTN_COQE_viT5_total_SAPOL_test_RS_SE
|
ThuyNT03
| 2024-01-05T18:03:09Z
| 7
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"base_model:finetune:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T16:44:07Z
|
---
license: mit
base_model: VietAI/vit5-large
tags:
- generated_from_trainer
model-index:
- name: KLTN_COQE_viT5_total_SAPOL_test_RS_SE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# KLTN_COQE_viT5_total_SAPOL_test_RS_SE
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.15.0
|
Lanxi24/hd_ff
|
Lanxi24
| 2024-01-05T18:00:52Z
| 1
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:openrail",
"region:us"
] |
text-to-image
| 2024-01-05T17:58:11Z
|
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/053d08ac64d012c34db8de3e185682f3.jpg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: null
license: openrail
---
# lora1
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Lanxi24/hd_ff/tree/main) them in the Files & versions tab.
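A minimal loading sketch with 🧨 diffusers (the LoRA filename below is a placeholder; use the actual `.safetensors` name from the Files & versions tab):
```python
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# weight_name is a placeholder for the actual LoRA file in this repo
pipe.load_lora_weights("Lanxi24/hd_ff", weight_name="lora.safetensors")
image = pipe("your prompt here", num_inference_steps=30).images[0]
image.save("out.png")
```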
|
ostapeno/newt_adaNeo1B_quarel_heres_a_story_sbs0.5_svdemb_sgd_full_ft_finegrained
|
ostapeno
| 2024-01-05T17:52:03Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-05T14:31:02Z
|
Number of experts present in the library: 9
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| quarel_heres_a_story_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
| quarel_heres_a_story_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/quarel_heres_a_story | lora |
Last updated on: 2024-01-05 17:52:00+00:00
|
godmethium/distilhubert-finetuned-gtzan
|
godmethium
| 2024-01-05T17:47:38Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"base_model:ntu-spml/distilhubert",
"base_model:finetune:ntu-spml/distilhubert",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2024-01-05T15:37:45Z
|
---
license: apache-2.0
base_model: ntu-spml/distilhubert
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: distilhubert-finetuned-gtzan
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: GTZAN
type: marsyas/gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.85
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-finetuned-gtzan
This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5876
- Accuracy: 0.85
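A minimal inference sketch (assuming the standard 🤗 audio-classification pipeline; the audio path is a placeholder):
```python
from transformers import pipeline
classifier = pipeline("audio-classification", model="godmethium/distilhubert-finetuned-gtzan")
# "clip.wav" is a placeholder path to any audio file; the model predicts a GTZAN genre label
print(classifier("clip.wav"))
```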
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9966 | 1.0 | 113 | 1.8989 | 0.55 |
| 1.2236 | 2.0 | 226 | 1.2805 | 0.6 |
| 0.8661 | 3.0 | 339 | 0.9010 | 0.74 |
| 0.6735 | 4.0 | 452 | 0.7560 | 0.78 |
| 0.466 | 5.0 | 565 | 0.7585 | 0.76 |
| 0.333 | 6.0 | 678 | 0.6572 | 0.81 |
| 0.1739 | 7.0 | 791 | 0.6360 | 0.83 |
| 0.2277 | 8.0 | 904 | 0.5453 | 0.81 |
| 0.1714 | 9.0 | 1017 | 0.5850 | 0.83 |
| 0.0892 | 10.0 | 1130 | 0.5876 | 0.85 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
bboninsss/CiceroNER.v1
|
bboninsss
| 2024-01-05T17:37:14Z
| 6
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"it",
"base_model:dbmdz/bert-base-italian-cased",
"base_model:finetune:dbmdz/bert-base-italian-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-05T14:19:17Z
|
---
license: mit
base_model: dbmdz/bert-base-italian-cased
language:
- it
model-index:
- name: italian_ner
results: []
widget:
- text: >-
REPUBBLICA ITALIANA IN NOME DEL POPOLO ITALIANO Il tribunale di Roma In
persona del Giudice Unico Dr. Mario Rossi ha emesso la seguente SENTENZA
Nella causa civile di 1 grado iscritta al N. 00100 del ruolo
generale dell’anno 2015, posta in deliberazione all’udienza dell’1 Gennaio
2016, e vertente Tra Giuseppe Bianchi, (C.F.BNCGPP80A01H501E) elettivamente domiciliato in Roma,
Via Termini 19, presso lo Studio dell’Avv. Antonio Verdi, che lo rappresenta
e difende per procura in calce alla comparsa di costituzione di nuovo
difensore OPPONENTE E Azienda panettieri S.p.A.
metrics:
- accuracy
- f1
library_name: transformers
---
# Named Entity Recognition (NER) Model for Italian Court Rulings - Description
This NER model was built to analyze named entities in rulings issued by Italian courts. It recognizes entities such as people, places, organizations, amounts, dates, and tax codes; it recognizes the titles avv and dott and identifies citations of laws, rulings, and proceedings.
## Details
Specifically, it recognizes:
- **PERSONA**
- **ORGANIZZAZIONE**
- **LUOGO**
- **DATA**
- **INDIRIZZO**
- **IMPORTO**
- **LEGGE**
- **AVV**
- **DOTT**
- **CODFISC**
- **NUMERO**
- **IDSENT**
- **IDPROC**
## How to Get Started with the Model
You can use the model through the Transformers *pipeline* for NER.
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline
tokenizer = AutoTokenizer.from_pretrained("bboninsss/CiceroNER.v1")
model = AutoModelForTokenClassification.from_pretrained("bboninsss/CiceroNER.v1")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "Mario Rossi è nato a Roma"
ner_results = nlp(example)
print(ner_results)
```
- **Developed by:** Marco Bonina
|
msivanes/summarization
|
msivanes
| 2024-01-05T17:25:18Z
| 11
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T17:24:57Z
|
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5596
- Rouge1: 0.2002
- Rouge2: 0.0988
- Rougel: 0.1673
- Rougelsum: 0.1672
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 124 | 2.6496 | 0.1597 | 0.0618 | 0.1338 | 0.1337 | 19.0 |
| No log | 2.0 | 248 | 2.5953 | 0.1968 | 0.0946 | 0.1651 | 0.1653 | 19.0 |
| No log | 3.0 | 372 | 2.5667 | 0.2006 | 0.0989 | 0.1678 | 0.1677 | 19.0 |
| No log | 4.0 | 496 | 2.5596 | 0.2002 | 0.0988 | 0.1673 | 0.1672 | 19.0 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
debapratimj/mistral-7b-finetuned-initial-30
|
debapratimj
| 2024-01-05T17:23:28Z
| 0
| 0
| null |
[
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T17:23:23Z
|
---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
```
|
Doctor-Shotgun/Norobara-ZLoss-8x7B
|
Doctor-Shotgun
| 2024-01-05T17:20:29Z
| 17
| 5
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-04T17:29:35Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
datasets:
- LDJnr/Capybara
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# Norobara-ZLoss-8x7B
This is an experimental instruct-tuned [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)-based model trained using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
It primarily uses the Capybara and No Robots datasets (thus the name). The goal was to create an uncensored general instruction following model, as well as test various loss implementations while we figure out how the heck to train Mixtral properly.
[Exl2 Quants](https://huggingface.co/royallab/Norobara-ZLoss-8x7B-exl2)
Quants courtesy of TheBloke:
- [GPTQ](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-GPTQ)
- [GGUF](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-GGUF)
- [AWQ](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-AWQ)
Additional Exl2 Quants courtesy of LoneStriker:
- [2.4bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-2.4bpw-h6-exl2)
- [3.0bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-3.0bpw-h6-exl2)
- [3.5bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-3.5bpw-h6-exl2)
- [3.75bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-3.75bpw-h6-exl2)
- [4.0bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-4.0bpw-h6-exl2)
- [5.0bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-5.0bpw-h6-exl2)
- [6.0bpw](https://huggingface.co/LoneStriker/Norobara-ZLoss-8x7B-6.0bpw-h6-exl2)
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
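For instance, here is a minimal generation sketch with 🤗 Transformers that builds a single-turn prompt in this format (a hedged illustration, not the author's serving setup; it assumes enough GPU memory to load the Mixtral-sized weights, and the sampling parameters are arbitrary):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "Doctor-Shotgun/Norobara-ZLoss-8x7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
# Build the modified multi-turn Alpaca prompt described above
prompt = (
    "### Instruction:\n"
    "You are a helpful assistant.\n\n"
    "### Input:\n"
    "Explain what a mixture-of-experts model is.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```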
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a QLoRA adapter for 3 epochs using a single H100 GPU for around 13 hours.
|
Sahyus/roberta-large-squad2-finetuned-dtc
|
Sahyus
| 2024-01-05T17:19:22Z
| 4
| 0
|
transformers
|
[
"transformers",
"tf",
"roberta",
"question-answering",
"generated_from_keras_callback",
"base_model:deepset/roberta-large-squad2",
"base_model:finetune:deepset/roberta-large-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2024-01-05T17:18:39Z
|
---
license: cc-by-4.0
base_model: deepset/roberta-large-squad2
tags:
- generated_from_keras_callback
model-index:
- name: roberta-large-squad2-finetuned-dtc
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-large-squad2-finetuned-dtc
This model is a fine-tuned version of [deepset/roberta-large-squad2](https://huggingface.co/deepset/roberta-large-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.9389
- Train End Logits Loss: 1.1432
- Train Start Logits Loss: 0.7957
- Train End Logits Acc: 0.7392
- Train Start Logits Acc: 0.8093
- Validation Loss: 3.7259
- Validation End Logits Loss: 1.8885
- Validation Start Logits Loss: 1.8374
- Validation End Logits Acc: 0.6312
- Validation Start Logits Acc: 0.7221
- Epoch: 36
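A minimal inference sketch (assuming TensorFlow is available, since this repo ships TF weights, and using the standard question-answering pipeline with a made-up context/question pair):
```python
from transformers import pipeline
qa = pipeline("question-answering", model="Sahyus/roberta-large-squad2-finetuned-dtc")
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-large-squad2 on an unknown dataset.",
)
print(result["answer"], result["score"])
```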
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2.4e-05, 'decay_steps': 21400, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.03}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Loss | Train Start Logits Loss | Train End Logits Acc | Train Start Logits Acc | Validation Loss | Validation End Logits Loss | Validation Start Logits Loss | Validation End Logits Acc | Validation Start Logits Acc | Epoch |
|:----------:|:---------------------:|:-----------------------:|:--------------------:|:----------------------:|:---------------:|:--------------------------:|:----------------------------:|:-------------------------:|:---------------------------:|:-----:|
| 5.8888 | 3.0592 | 2.8296 | 0.5456 | 0.5406 | 4.8715 | 2.6861 | 2.1854 | 0.6130 | 0.6182 | 0 |
| 5.0000 | 2.7063 | 2.2937 | 0.5809 | 0.5762 | 4.0680 | 2.3593 | 1.7087 | 0.6208 | 0.6000 | 1 |
| 4.7529 | 2.5952 | 2.1576 | 0.5929 | 0.5862 | 4.5767 | 2.7450 | 1.8317 | 0.6208 | 0.6156 | 2 |
| 4.6181 | 2.5511 | 2.0670 | 0.5984 | 0.5873 | 3.9828 | 2.4125 | 1.5703 | 0.6208 | 0.6052 | 3 |
| 4.4766 | 2.4920 | 1.9846 | 0.6019 | 0.5946 | 3.7862 | 2.2460 | 1.5402 | 0.6208 | 0.5922 | 4 |
| 4.5692 | 2.5720 | 1.9972 | 0.6081 | 0.6066 | 3.6069 | 2.1558 | 1.4511 | 0.6208 | 0.6052 | 5 |
| 4.3098 | 2.4382 | 1.8716 | 0.6016 | 0.5987 | 3.2741 | 1.9275 | 1.3466 | 0.6208 | 0.6364 | 6 |
| 3.8906 | 2.2240 | 1.6666 | 0.6165 | 0.6256 | 3.3856 | 1.9718 | 1.4138 | 0.6156 | 0.6052 | 7 |
| 3.7711 | 2.1773 | 1.5939 | 0.6154 | 0.6317 | 3.4381 | 1.7916 | 1.6465 | 0.6182 | 0.4805 | 8 |
| 3.6331 | 2.1149 | 1.5182 | 0.6177 | 0.6460 | 3.0055 | 1.6855 | 1.3200 | 0.5273 | 0.6338 | 9 |
| 3.4683 | 2.0212 | 1.4471 | 0.6168 | 0.6545 | 3.3422 | 1.7875 | 1.5547 | 0.4805 | 0.5325 | 10 |
| 3.3695 | 1.9567 | 1.4129 | 0.6183 | 0.6618 | 2.8283 | 1.5488 | 1.2795 | 0.5455 | 0.6286 | 11 |
| 3.3125 | 1.9344 | 1.3781 | 0.6215 | 0.6647 | 2.7086 | 1.5124 | 1.1962 | 0.5636 | 0.6338 | 12 |
| 3.2580 | 1.9282 | 1.3298 | 0.6390 | 0.6852 | 3.0502 | 1.7520 | 1.2982 | 0.6156 | 0.6623 | 13 |
| 3.2814 | 1.9478 | 1.3336 | 0.6294 | 0.6711 | 2.5437 | 1.4591 | 1.0846 | 0.5948 | 0.6727 | 14 |
| 3.1027 | 1.8305 | 1.2721 | 0.6370 | 0.6893 | 3.0537 | 1.6897 | 1.3640 | 0.5481 | 0.5922 | 15 |
| 2.7670 | 1.6628 | 1.1042 | 0.6583 | 0.7217 | 2.4372 | 1.3791 | 1.0581 | 0.6519 | 0.6961 | 16 |
| 2.7880 | 1.6975 | 1.0905 | 0.6583 | 0.7339 | 2.2441 | 1.2735 | 0.9706 | 0.7039 | 0.7299 | 17 |
| 2.7786 | 1.6524 | 1.1262 | 0.6606 | 0.7225 | 2.6408 | 1.4267 | 1.2141 | 0.6701 | 0.6831 | 18 |
| 2.4685 | 1.4862 | 0.9823 | 0.6741 | 0.7447 | 2.7726 | 1.5947 | 1.1779 | 0.6338 | 0.6909 | 19 |
| 2.4204 | 1.4523 | 0.9682 | 0.6814 | 0.7538 | 2.1115 | 1.1877 | 0.9238 | 0.7429 | 0.7714 | 20 |
| 2.2158 | 1.3472 | 0.8686 | 0.6939 | 0.7707 | 2.2647 | 1.2382 | 1.0266 | 0.7143 | 0.7532 | 21 |
| 2.0138 | 1.2461 | 0.7676 | 0.7109 | 0.7994 | 2.1425 | 1.1617 | 0.9808 | 0.7455 | 0.7558 | 22 |
| 2.0038 | 1.2585 | 0.7453 | 0.7129 | 0.8008 | 1.8952 | 0.9984 | 0.8968 | 0.7688 | 0.7558 | 23 |
| 1.8391 | 1.1600 | 0.6791 | 0.7231 | 0.8186 | 2.4242 | 1.3208 | 1.1034 | 0.7013 | 0.7039 | 24 |
| 1.7792 | 1.1060 | 0.6732 | 0.7389 | 0.8248 | 1.8800 | 1.0211 | 0.8588 | 0.7792 | 0.7818 | 25 |
| 1.6690 | 1.0636 | 0.6054 | 0.7462 | 0.8367 | 2.2503 | 1.2198 | 1.0305 | 0.7325 | 0.7506 | 26 |
| 1.6197 | 1.0327 | 0.5870 | 0.7591 | 0.8452 | 1.9393 | 0.9581 | 0.9812 | 0.7974 | 0.8052 | 27 |
| 1.5335 | 0.9795 | 0.5540 | 0.7652 | 0.8595 | 2.2046 | 1.1750 | 1.0296 | 0.7688 | 0.7870 | 28 |
| 1.4563 | 0.9314 | 0.5249 | 0.7751 | 0.8621 | 1.9638 | 1.0204 | 0.9434 | 0.7403 | 0.7792 | 29 |
| 1.3903 | 0.9049 | 0.4854 | 0.7772 | 0.8683 | 2.2657 | 1.1569 | 1.1088 | 0.7636 | 0.7896 | 30 |
| 1.3534 | 0.8813 | 0.4720 | 0.7859 | 0.8744 | 1.9620 | 0.9779 | 0.9840 | 0.7688 | 0.7740 | 31 |
| 1.4848 | 0.9444 | 0.5405 | 0.7684 | 0.8563 | 2.3368 | 1.1941 | 1.1427 | 0.7299 | 0.7688 | 32 |
| 1.5092 | 0.9534 | 0.5558 | 0.7550 | 0.8461 | 2.1233 | 1.0956 | 1.0277 | 0.7610 | 0.7740 | 33 |
| 1.4016 | 0.8789 | 0.5227 | 0.7751 | 0.8624 | 2.4886 | 1.2593 | 1.2294 | 0.7403 | 0.7844 | 34 |
| 1.8007 | 1.0509 | 0.7498 | 0.7520 | 0.8183 | 2.5730 | 1.3045 | 1.2686 | 0.7195 | 0.7481 | 35 |
| 1.9389 | 1.1432 | 0.7957 | 0.7392 | 0.8093 | 3.7259 | 1.8885 | 1.8374 | 0.6312 | 0.7221 | 36 |
### Framework versions
- Transformers 4.36.2
- TensorFlow 2.14.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
anismahmahi/G2-multilabel-setfit-model
|
anismahmahi
| 2024-01-05T17:15:42Z
| 8
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-05T17:15:21Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: 'It was a jihad training camp.
'
- text: 'Batten echoed that sentiment saying, “Tommy Robinson is a political prisoner."
'
- text: 'Failing to answer, Ellison tried to move from person to person, allowing
his minions to try and provide cover for him, similar to that of Maxine Waters,
but there was no "member''s only" elevator to flee into.
'
- text: 'More details about the horrid compound could be revealed Wednesday when the
five adults arrested from the site make their first court appearances.
'
- text: 'Black Death Warning: The Plague Is Impossible To Eradicate
'
pipeline_tag: text-classification
inference: false
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.5849056603773585
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A OneVsRestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
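As a rough illustration of that two-step recipe, here is a minimal single-label training sketch with toy data (this particular model was trained as a multilabel OneVsRest classifier, so the real setup differs):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments
# Toy few-shot dataset, purely illustrative
train_dataset = Dataset.from_dict({
    "text": ["It was a jihad training camp.", "The weather was lovely today."],
    "label": [1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=2)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # step 1: contrastive fine-tuning of the body; step 2: fitting the classification head
preds = model.predict(["Tommy Robinson is a political prisoner."])
```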
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a OneVsRestClassifier instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.5849 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/G2-multilabel-setfit-model")
# Run inference
preds = model("It was a jihad training camp.
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 26.6518 | 129 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 10
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0006 | 1 | 0.3905 | - |
| 0.0275 | 50 | 0.2239 | - |
| 0.0550 | 100 | 0.2359 | - |
| 0.0826 | 150 | 0.2443 | - |
| 0.1101 | 200 | 0.2495 | - |
| 0.1376 | 250 | 0.2498 | - |
| 0.1651 | 300 | 0.116 | - |
| 0.1926 | 350 | 0.1672 | - |
| 0.2201 | 400 | 0.1281 | - |
| 0.2477 | 450 | 0.139 | - |
| 0.2752 | 500 | 0.0615 | - |
| 0.3027 | 550 | 0.0972 | - |
| 0.3302 | 600 | 0.0851 | - |
| 0.3577 | 650 | 0.1769 | - |
| 0.3853 | 700 | 0.1673 | - |
| 0.4128 | 750 | 0.0615 | - |
| 0.4403 | 800 | 0.1232 | - |
| 0.4678 | 850 | 0.0094 | - |
| 0.4953 | 900 | 0.0135 | - |
| 0.5228 | 950 | 0.0107 | - |
| 0.5504 | 1000 | 0.1137 | - |
| 0.5779 | 1050 | 0.0173 | - |
| 0.6054 | 1100 | 0.0573 | - |
| 0.6329 | 1150 | 0.0115 | - |
| 0.6604 | 1200 | 0.0374 | - |
| 0.6879 | 1250 | 0.0231 | - |
| 0.7155 | 1300 | 0.0392 | - |
| 0.7430 | 1350 | 0.0754 | - |
| 0.7705 | 1400 | 0.007 | - |
| 0.7980 | 1450 | 0.0138 | - |
| 0.8255 | 1500 | 0.0569 | - |
| 0.8531 | 1550 | 0.0971 | - |
| 0.8806 | 1600 | 0.1052 | - |
| 0.9081 | 1650 | 0.0084 | - |
| 0.9356 | 1700 | 0.0859 | - |
| 0.9631 | 1750 | 0.0081 | - |
| 0.9906 | 1800 | 0.0362 | - |
| 1.0 | 1817 | - | 0.2354 |
| 1.0182 | 1850 | 0.0429 | - |
| 1.0457 | 1900 | 0.056 | - |
| 1.0732 | 1950 | 0.0098 | - |
| 1.1007 | 2000 | 0.002 | - |
| 1.1282 | 2050 | 0.0892 | - |
| 1.1558 | 2100 | 0.0557 | - |
| 1.1833 | 2150 | 0.001 | - |
| 1.2108 | 2200 | 0.0125 | - |
| 1.2383 | 2250 | 0.0152 | - |
| 1.2658 | 2300 | 0.0202 | - |
| 1.2933 | 2350 | 0.0593 | - |
| 1.3209 | 2400 | 0.007 | - |
| 1.3484 | 2450 | 0.014 | - |
| 1.3759 | 2500 | 0.003 | - |
| 1.4034 | 2550 | 0.0012 | - |
| 1.4309 | 2600 | 0.0139 | - |
| 1.4584 | 2650 | 0.0149 | - |
| 1.4860 | 2700 | 0.002 | - |
| 1.5135 | 2750 | 0.009 | - |
| 1.5410 | 2800 | 0.0066 | - |
| 1.5685 | 2850 | 0.0173 | - |
| 1.5960 | 2900 | 0.0052 | - |
| 1.6236 | 2950 | 0.0039 | - |
| 1.6511 | 3000 | 0.0042 | - |
| 1.6786 | 3050 | 0.0339 | - |
| 1.7061 | 3100 | 0.001 | - |
| 1.7336 | 3150 | 0.0005 | - |
| 1.7611 | 3200 | 0.0049 | - |
| 1.7887 | 3250 | 0.01 | - |
| 1.8162 | 3300 | 0.0815 | - |
| 1.8437 | 3350 | 0.0227 | - |
| 1.8712 | 3400 | 0.005 | - |
| 1.8987 | 3450 | 0.0053 | - |
| 1.9263 | 3500 | 0.0152 | - |
| 1.9538 | 3550 | 0.0155 | - |
| 1.9813 | 3600 | 0.0182 | - |
| **2.0** | **3634** | **-** | **0.2266** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
aumy/a2c-PandaReachDense-v3
|
aumy
| 2024-01-05T17:03:57Z
| 1
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T14:24:22Z
|
---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.24 +/- 0.11
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of an **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
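Until then, a minimal loading sketch (the checkpoint filename below follows the usual huggingface_sb3 naming convention and is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
# Download the checkpoint from the Hub (filename assumed to follow the default convention)
checkpoint = load_from_hub(
    repo_id="aumy/a2c-PandaReachDense-v3",
    filename="a2c-PandaReachDense-v3.zip",
)
model = A2C.load(checkpoint)
```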
|
Sararodriguezabou/gradio-test
|
Sararodriguezabou
| 2024-01-05T17:01:57Z
| 0
| 0
|
fastai
|
[
"fastai",
"region:us"
] | null | 2024-01-02T18:45:37Z
|
---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
TheBloke/Beyonder-4x7B-v2-GGUF
|
TheBloke
| 2024-01-05T16:59:41Z
| 226
| 38
|
transformers
|
[
"transformers",
"gguf",
"mixtral",
"moe",
"base_model:mlabonne/Beyonder-4x7B-v2",
"base_model:quantized:mlabonne/Beyonder-4x7B-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-01-05T16:50:26Z
|
---
base_model: mlabonne/Beyonder-4x7B-v2
inference: false
license: apache-2.0
model_creator: Maxime Labonne
model_name: Beyonder 4X7B v2
model_type: mixtral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- moe
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Beyonder 4X7B v2 - GGUF
- Model creator: [Maxime Labonne](https://huggingface.co/mlabonne)
- Original model: [Beyonder 4X7B v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Maxime Labonne's Beyonder 4X7B v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF)
* [Maxime Labonne's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mlabonne/Beyonder-4x7B-v2)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
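As a sanity check on one of these figures, the Q4_K number can be reproduced from the super-block layout described above (the extra fp16 super-block scale and min is an assumption about llama.cpp's k-quant layout):
```python
# Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights_per_superblock = 8 * 32
quant_bits = weights_per_superblock * 4        # 4-bit quantized weights
block_meta_bits = 8 * 6 + 8 * 6                # 6-bit scale + 6-bit min per block
superblock_meta_bits = 2 * 16                  # fp16 super-block scale and min
bpw = (quant_bits + block_meta_bits + superblock_meta_bits) / weights_per_superblock
print(bpw)  # 4.5 bits per weight, matching the figure above
```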
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [beyonder-4x7b-v2.Q2_K.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q2_K.gguf) | Q2_K | 2 | 8.06 GB| 10.56 GB | smallest, significant quality loss - not recommended for most purposes |
| [beyonder-4x7b-v2.Q3_K_M.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q3_K_M.gguf) | Q3_K_M | 3 | 10.52 GB| 13.02 GB | very small, high quality loss |
| [beyonder-4x7b-v2.Q4_0.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q4_0.gguf) | Q4_0 | 4 | 13.62 GB| 16.12 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [beyonder-4x7b-v2.Q4_K_M.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q4_K_M.gguf) | Q4_K_M | 4 | 13.64 GB| 16.14 GB | medium, balanced quality - recommended |
| [beyonder-4x7b-v2.Q5_0.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q5_0.gguf) | Q5_0 | 5 | 16.63 GB| 19.13 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [beyonder-4x7b-v2.Q5_K_M.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q5_K_M.gguf) | Q5_K_M | 5 | 16.64 GB| 19.14 GB | large, very low quality loss - recommended |
| [beyonder-4x7b-v2.Q6_K.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q6_K.gguf) | Q6_K | 6 | 19.82 GB| 22.32 GB | very large, extremely low quality loss |
| [beyonder-4x7b-v2.Q8_0.gguf](https://huggingface.co/TheBloke/Beyonder-4x7B-v2-GGUF/blob/main/beyonder-4x7b-v2.Q8_0.gguf) | Q8_0 | 8 | 25.67 GB| 28.17 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Beyonder-4x7B-v2-GGUF and below it, a specific filename to download, such as: beyonder-4x7b-v2.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Beyonder-4x7B-v2-GGUF beyonder-4x7b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Beyonder-4x7B-v2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Beyonder-4x7B-v2-GGUF beyonder-4x7b-v2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m beyonder-4x7b-v2.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./beyonder-4x7b-v2.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./beyonder-4x7b-v2.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain; a minimal llama-cpp-python sketch follows the links below:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
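As a minimal sketch of the llama-cpp-python route through LangChain (assuming a LangChain version that ships the community `LlamaCpp` wrapper; on older releases import it from `langchain.llms` instead):
```python
# Minimal LangChain + llama-cpp-python sketch; paths and parameters are illustrative.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./beyonder-4x7b-v2.Q4_K_M.gguf",  # GGUF file downloaded as shown above
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    n_ctx=32768,      # context length; reduce if you run out of memory
    temperature=0.7,
    max_tokens=512,
)

prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a Mixture of Experts is.<|im_end|>\n"
    "<|im_start|>assistant"
)
print(llm.invoke(prompt))
```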
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Maxime Labonne's Beyonder 4X7B v2

# Beyonder-4x7B-v2
This model is a Mixture of Experts (MoE) made with [mergekit](https://github.com/cg123/mergekit) (mixtral branch). It uses the following base models:
* [openchat/openchat-3.5-1210](https://huggingface.co/openchat/openchat-3.5-1210)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [maywell/PiVoT-0.1-Starling-LM-RP](https://huggingface.co/maywell/PiVoT-0.1-Starling-LM-RP)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
## 🏆 Evaluation
Beyonder-4x7B-v2 is competitive with Mixtral-8x7B-Instruct-v0.1 on the Open LLM Leaderboard, while only having 4 experts instead of 8.

It also displays a significant improvement over the individual experts.

It also performs very well on the Nous benchmark suite compared to other models, coming close to the best Yi-34B fine-tune despite being much smaller: 24.2B total parameters with only two experts active per token during inference (roughly 12B), versus 34B parameters.
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|--------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[**Beyonder-4x7B-v2**](https://huggingface.co/shadowml/Beyonder-4x7B-v2)| **45.29**| **75.95**| <u>**60.86**</u>| **46.4**| **57.13**|
|[NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)| 43.67| 73.24| 55.37| 41.76| 53.51|
|[OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)| 42.75| 72.99| 52.99| 40.94| 52.42|
|[Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)| 47.79| 74.69| 55.92| 44.84| 55.81|
|[Nous-Hermes-2-Yi-34B](https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B)| <u>50.27</u>| <u>76.00</u>| 60.34| <u>46.69</u>| <u>58.33</u>|
### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |23.62|± | 2.67|
| | |acc_norm|23.62|± | 2.67|
|agieval_logiqa_en | 0|acc |41.47|± | 1.93|
| | |acc_norm|43.01|± | 1.94|
|agieval_lsat_ar | 0|acc |23.04|± | 2.78|
| | |acc_norm|23.48|± | 2.80|
|agieval_lsat_lr | 0|acc |51.57|± | 2.22|
| | |acc_norm|52.94|± | 2.21|
|agieval_lsat_rc | 0|acc |64.31|± | 2.93|
| | |acc_norm|64.68|± | 2.92|
|agieval_sat_en | 0|acc |79.13|± | 2.84|
| | |acc_norm|79.13|± | 2.84|
|agieval_sat_en_without_passage| 0|acc |43.20|± | 3.46|
| | |acc_norm|43.20|± | 3.46|
|agieval_sat_math | 0|acc |34.55|± | 3.21|
| | |acc_norm|32.27|± | 3.16|
### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |61.86|± | 1.42|
| | |acc_norm|64.51|± | 1.40|
|arc_easy | 0|acc |85.06|± | 0.73|
| | |acc_norm|82.45|± | 0.78|
|boolq | 1|acc |88.35|± | 0.56|
|hellaswag | 0|acc |68.04|± | 0.47|
| | |acc_norm|85.12|± | 0.36|
|openbookqa | 0|acc |37.80|± | 2.17|
| | |acc_norm|48.60|± | 2.24|
|piqa | 0|acc |83.08|± | 0.87|
| | |acc_norm|83.95|± | 0.86|
|winogrande | 0|acc |78.69|± | 1.15|
### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |44.55|± | 1.74|
| | |mc2 |60.86|± | 1.57|
### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|58.95|± | 3.58|
|bigbench_date_understanding | 0|multiple_choice_grade|66.40|± | 2.46|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|48.84|± | 3.12|
|bigbench_geometric_shapes | 0|multiple_choice_grade|22.56|± | 2.21|
| | |exact_str_match |13.37|± | 1.80|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|30.40|± | 2.06|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.57|± | 1.53|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|52.00|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|44.40|± | 2.22|
|bigbench_navigate | 0|multiple_choice_grade|52.10|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|69.75|± | 1.03|
|bigbench_ruin_names | 0|multiple_choice_grade|55.36|± | 2.35|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|23.65|± | 1.35|
|bigbench_snarks | 0|multiple_choice_grade|77.35|± | 3.12|
|bigbench_sports_understanding | 0|multiple_choice_grade|73.02|± | 1.41|
|bigbench_temporal_sequences | 0|multiple_choice_grade|46.80|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|22.08|± | 1.17|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|19.03|± | 0.94|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|52.00|± | 2.89|
## 🧩 Configuration
```yaml
base_model: mlabonne/Marcoro14-7B-slerp
experts:
- source_model: openchat/openchat-3.5-1210
positive_prompts:
- "chat"
- "assistant"
- "tell me"
- "explain"
- source_model: beowolx/CodeNinja-1.0-OpenChat-7B
positive_prompts:
- "code"
- "python"
- "javascript"
- "programming"
- "algorithm"
- source_model: maywell/PiVoT-0.1-Starling-LM-RP
positive_prompts:
- "storywriting"
- "write"
- "scene"
- "story"
- "character"
- source_model: WizardLM/WizardMath-7B-V1.1
positive_prompts:
- "reason"
- "math"
- "mathematics"
- "solve"
- "count"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/Beyonder-4x7B-v2"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
<!-- original-model-card end -->
|
LoneStriker/Rosa_v2_7B-3.0bpw-h6-exl2
|
LoneStriker
| 2024-01-05T16:58:22Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T16:57:02Z
|
---
license: other
language:
- en
---
This is my daily driver, made by finetuning my v1 model on several custom datasets and then merging the results together. It's smart enough: it can do prose, and it can do roleplaying.
Overall, it's worth a try if you're a 7B enthusiast. Just don't expect it to be better than the sum of its parts.
|
marquesafonso/bertimbau-large-ner-total
|
marquesafonso
| 2024-01-05T16:57:16Z
| 79
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"pt",
"arxiv:1909.10649",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-05T14:16:42Z
|
---
license: mit
language:
- pt
---
# bertimbau-large-ner-total
This model card aims to simplify the use of the [portuguese Bert, a.k.a, Bertimbau](https://github.com/neuralmind-ai/portuguese-bert) for the Named Entity Recognition task.
For this model card we used the <mark style="background-color: grey"> BERT-CRF (total scenario, 10 classes) </mark> model available in the [ner_evaluation](https://github.com/neuralmind-ai/portuguese-bert/tree/master/ner_evaluation) folder of the original Bertimbau repo.
Available classes are:
+ PESSOA
+ ORGANIZACAO
+ LOCAL
+ TEMPO
+ VALOR
+ ABSTRACCAO
+ ACONTECIMENTO
+ COISA
+ OBRA
+ OUTRO
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("marquesafonso/bertimbau-large-ner-total")
model = AutoModelForTokenClassification.from_pretrained("marquesafonso/bertimbau-large-ner-total")
```
## Example
```
from transformers import pipeline
pipe = pipeline("ner", model="marquesafonso/bertimbau-large-ner-total", aggregation_strategy='simple')
sentence = "James Marsh, realizador de filmes como A Teoria de Tudo ou Homem no Arame, assumiu a missão de criar uma obra biográfica sobre Samue Beckett, figura ímpar da literatura e da dramaturgia do século XX. O guião foi escrito pelo escocês Neil Forsyth, vencedor de dois Baftas."
result = pipe([sentence])
print(f"{sentence}\n{result}")
# James Marsh, realizador de filmes como A Teoria de Tudo ou Homem no Arame, assumiu a missão de criar uma obra biográfica sobre Samue Beckett, figura ímpar da literatura e da dramaturgia do século XX. O guião foi escrito pelo escocês Neil Forsyth, vencedor de dois Baftas.
# [[
# {'entity_group': 'PESSOA', 'score': 0.99737316, 'word': 'James Marsh', 'start': 0, 'end': 11},
# {'entity_group': 'OBRA', 'score': 0.9823761, 'word': 'A Teoria de Tudo', 'start': 39, 'end': 55},
# {'entity_group': 'OBRA', 'score': 0.96812135, 'word': 'Homem no Arame', 'start': 59, 'end': 73},
# {'entity_group': 'PESSOA', 'score': 0.9954967, 'word': 'Samue Beckett', 'start': 127, 'end': 140},
# {'entity_group': 'TEMPO', 'score': 0.97845674, 'word': 'século XX', 'start': 189, 'end': 198},
# {'entity_group': 'PESSOA', 'score': 0.9962597, 'word': 'Neil Forsyth', 'start': 233, 'end': 245},
# {'entity_group': 'OUTRO', 'score': 0.7552187, 'word': 'Baftas', 'start': 264, 'end': 270}
# ]]
```
## Acknowledgements
This work is an adaptation of [portuguese Bert, a.k.a, Bertimbau](https://github.com/neuralmind-ai/portuguese-bert). You may check and/or cite their [work](http://arxiv.org/abs/1909.10649):
```
@InProceedings{souza2020bertimbau,
author="Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto",
editor="Cerri, Ricardo and Prati, Ronaldo C.",
title="BERTimbau: Pretrained BERT Models for Brazilian Portuguese",
booktitle="Intelligent Systems",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="403--417",
isbn="978-3-030-61377-8"
}
@article{souza2019portuguese,
title={Portuguese Named Entity Recognition using BERT-CRF},
author={Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:1909.10649},
url={http://arxiv.org/abs/1909.10649},
year={2019}
}
```
Note that the authors - Fabio Capuano de Souza, Rodrigo Nogueira, Roberto de Alencar Lotufo - have used an MIT LICENSE for their work.
|
anhdt-dsai-02/ViT5_base_2048_with_sum
|
anhdt-dsai-02
| 2024-01-05T16:55:01Z
| 91
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T11:15:40Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ViT5_base_2048_with_sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT5_base_2048_with_sum
This model is a fine-tuned version of [VietAI/vit5-base](https://huggingface.co/VietAI/vit5-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
LoneStriker/Mistral-7B-Instruct-v0.2-code-ft-6.0bpw-h6-exl2
|
LoneStriker
| 2024-01-05T16:44:27Z
| 9
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T16:42:00Z
|
---
license: cc-by-nc-nd-4.0
---
# Mistral-7B-Instruct-v0.2-code-ft
I'm thrilled to introduce the latest iteration of our model, Mistral-7B-Instruct-v0.2-code-ft. This updated version is designed to further enhance coding assistance and co-pilot functionalities. We're eager for developers and enthusiasts to try it out and provide feedback!
## Additional Information
This version builds upon the previous Mistral-7B models, incorporating new datasets and features for a more refined experience.
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
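As a small illustrative helper (not part of the original card), the template above can be filled in before the string is handed to whichever backend you run the quantized weights with:
```python
# Illustrative ChatML prompt builder; the example messages are placeholders.
def build_chatml_prompt(system_message: str, prompt: str) -> str:
    return (
        f"<|im_start|>system\n{system_message}<|im_end|>\n"
        f"<|im_start|>user\n{prompt}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

print(build_chatml_prompt(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
))
```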
## Eval Plus Performance
For detailed performance metrics, visit Eval Plus page: [Mistral-7B-Instruct-v0.2-code-ft Eval Plus](https://github.com/evalplus/evalplus)
Score: 0.421

## Dataset:
The model has been trained on a new dataset to improve its performance and versatility:
- path: ajibawa-2023/Code-74k-ShareGPT
type: sharegpt
conversation: chatml
Find more about the dataset here: [Code-74k-ShareGPT Dataset](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT)
## Model Architecture
- Base Model: mistralai/Mistral-7B-Instruct-v0.2
- Tokenizer Type: LlamaTokenizer
- Model Type: MistralForCausalLM
- Is Mistral Derived Model: true
- Sequence Length: 16384 with sample packing
## Enhanced Features
- Adapter: qlora
- Learning Rate: 0.0002 with cosine lr scheduler
- Optimizer: adamw_bnb_8bit
- Training Enhancements: bf16 training, gradient checkpointing, and flash attention
## Download Information
You can download and explore this model directly from its Hugging Face repository.
## Contributions and Feedback
We welcome contributions and feedback from the community. Please feel free to open issues or pull requests on the repository.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
LoneStriker/Mistral-7B-Instruct-v0.2-code-ft-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-05T16:41:57Z
| 7
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T16:39:40Z
|
---
license: cc-by-nc-nd-4.0
---
# Mistral-7B-Instruct-v0.2-code-ft
I'm thrilled to introduce the latest iteration of our model, Mistral-7B-Instruct-v0.2-code-ft. This updated version is designed to further enhance coding assistance and co-pilot functionalities. We're eager for developers and enthusiasts to try it out and provide feedback!
## Additional Information
This version builds upon the previous Mistral-7B models, incorporating new datasets and features for a more refined experience.
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Eval Plus Performance
For detailed performance metrics, visit Eval Plus page: [Mistral-7B-Instruct-v0.2-code-ft Eval Plus](https://github.com/evalplus/evalplus)
Score: 0.421

## Dataset:
The model has been trained on a new dataset to improve its performance and versatility:
- path: ajibawa-2023/Code-74k-ShareGPT
type: sharegpt
conversation: chatml
Find more about the dataset here: [Code-74k-ShareGPT Dataset](https://huggingface.co/datasets/ajibawa-2023/Code-74k-ShareGPT)
## Model Architecture
- Base Model: mistralai/Mistral-7B-Instruct-v0.2
- Tokenizer Type: LlamaTokenizer
- Model Type: MistralForCausalLM
- Is Mistral Derived Model: true
- Sequence Length: 16384 with sample packing
## Enhanced Features
- Adapter: qlora
- Learning Rate: 0.0002 with cosine lr scheduler
- Optimizer: adamw_bnb_8bit
- Training Enhancements: bf16 training, gradient checkpointing, and flash attention
## Download Information
You can download and explore this model directly from its Hugging Face repository.
## Contributions and Feedback
We welcome contributions and feedback from the community. Please feel free to open issues or pull requests on the repository.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
3una/finetuned-AffectNet
|
3una
| 2024-01-05T16:41:40Z
| 54
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"beit",
"image-classification",
"generated_from_trainer",
"base_model:microsoft/beit-base-patch16-224-pt22k-ft22k",
"base_model:finetune:microsoft/beit-base-patch16-224-pt22k-ft22k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-12-31T20:45:06Z
|
---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224-pt22k-ft22k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-AffectNet
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-AffectNet
This model is a fine-tuned version of [microsoft/beit-base-patch16-224-pt22k-ft22k](https://huggingface.co/microsoft/beit-base-patch16-224-pt22k-ft22k) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8122
- Accuracy: 0.7345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 2.0686 | 1.0 | 163 | 2.0963 | 0.1549 |
| 1.7148 | 2.0 | 327 | 1.7250 | 0.2943 |
| 1.4591 | 3.0 | 490 | 1.4418 | 0.4204 |
| 1.3351 | 4.0 | 654 | 1.2648 | 0.5194 |
| 1.1343 | 5.0 | 817 | 1.0728 | 0.5908 |
| 1.1022 | 6.0 | 981 | 0.9741 | 0.6355 |
| 1.0476 | 7.0 | 1144 | 0.9203 | 0.6631 |
| 1.0049 | 8.0 | 1308 | 0.8769 | 0.6760 |
| 0.9561 | 9.0 | 1471 | 0.8438 | 0.6966 |
| 0.9409 | 10.0 | 1635 | 0.8283 | 0.6988 |
| 0.9419 | 11.0 | 1798 | 0.7867 | 0.7164 |
| 0.89 | 12.0 | 1962 | 0.7858 | 0.7139 |
| 0.8761 | 13.0 | 2125 | 0.7704 | 0.7147 |
| 0.8662 | 14.0 | 2289 | 0.7590 | 0.7225 |
| 0.8561 | 15.0 | 2452 | 0.7574 | 0.7199 |
| 0.8234 | 16.0 | 2616 | 0.7457 | 0.7238 |
| 0.844 | 17.0 | 2779 | 0.7416 | 0.7255 |
| 0.7908 | 18.0 | 2943 | 0.7485 | 0.7255 |
| 0.809 | 19.0 | 3106 | 0.7428 | 0.7250 |
| 0.7976 | 20.0 | 3270 | 0.7597 | 0.7203 |
| 0.7691 | 21.0 | 3433 | 0.7333 | 0.7345 |
| 0.7408 | 22.0 | 3597 | 0.7362 | 0.7246 |
| 0.7516 | 23.0 | 3760 | 0.7301 | 0.7298 |
| 0.7887 | 24.0 | 3924 | 0.7263 | 0.7332 |
| 0.7475 | 25.0 | 4087 | 0.7301 | 0.7293 |
| 0.7619 | 26.0 | 4251 | 0.7334 | 0.7298 |
| 0.7509 | 27.0 | 4414 | 0.7332 | 0.7345 |
| 0.7212 | 28.0 | 4578 | 0.7301 | 0.7367 |
| 0.7053 | 29.0 | 4741 | 0.7293 | 0.7328 |
| 0.6634 | 30.0 | 4905 | 0.7412 | 0.7298 |
| 0.677 | 31.0 | 5068 | 0.7221 | 0.7375 |
| 0.6453 | 32.0 | 5232 | 0.7281 | 0.7392 |
| 0.6961 | 33.0 | 5395 | 0.7280 | 0.7392 |
| 0.7135 | 34.0 | 5559 | 0.7348 | 0.7362 |
| 0.6871 | 35.0 | 5722 | 0.7334 | 0.7293 |
| 0.6829 | 36.0 | 5886 | 0.7281 | 0.7328 |
| 0.6742 | 37.0 | 6049 | 0.7332 | 0.7354 |
| 0.6167 | 38.0 | 6213 | 0.7274 | 0.7384 |
| 0.665 | 39.0 | 6376 | 0.7322 | 0.7311 |
| 0.6433 | 40.0 | 6540 | 0.7473 | 0.7345 |
| 0.6661 | 41.0 | 6703 | 0.7358 | 0.7341 |
| 0.6424 | 42.0 | 6867 | 0.7413 | 0.7324 |
| 0.6369 | 43.0 | 7030 | 0.7314 | 0.7414 |
| 0.611 | 44.0 | 7194 | 0.7325 | 0.7388 |
| 0.6556 | 45.0 | 7357 | 0.7485 | 0.7354 |
| 0.6524 | 46.0 | 7521 | 0.7434 | 0.7418 |
| 0.6176 | 47.0 | 7684 | 0.7402 | 0.7410 |
| 0.6142 | 48.0 | 7848 | 0.7480 | 0.7315 |
| 0.5968 | 49.0 | 8011 | 0.7457 | 0.7384 |
| 0.6132 | 50.0 | 8175 | 0.7514 | 0.7328 |
| 0.592 | 51.0 | 8338 | 0.7500 | 0.7375 |
| 0.6347 | 52.0 | 8502 | 0.7533 | 0.7345 |
| 0.5976 | 53.0 | 8665 | 0.7539 | 0.7324 |
| 0.5496 | 54.0 | 8829 | 0.7495 | 0.7388 |
| 0.5845 | 55.0 | 8992 | 0.7550 | 0.7367 |
| 0.5624 | 56.0 | 9156 | 0.7606 | 0.7362 |
| 0.5582 | 57.0 | 9319 | 0.7598 | 0.7341 |
| 0.6206 | 58.0 | 9483 | 0.7608 | 0.7345 |
| 0.5647 | 59.0 | 9646 | 0.7578 | 0.7388 |
| 0.6093 | 60.0 | 9810 | 0.7646 | 0.7358 |
| 0.5625 | 61.0 | 9973 | 0.7622 | 0.7388 |
| 0.6114 | 62.0 | 10137 | 0.7702 | 0.7324 |
| 0.5304 | 63.0 | 10300 | 0.7710 | 0.7367 |
| 0.5646 | 64.0 | 10464 | 0.7807 | 0.7298 |
| 0.5774 | 65.0 | 10627 | 0.7793 | 0.7328 |
| 0.5825 | 66.0 | 10791 | 0.7786 | 0.7375 |
| 0.5111 | 67.0 | 10954 | 0.7742 | 0.7380 |
| 0.5849 | 68.0 | 11118 | 0.7779 | 0.7349 |
| 0.5454 | 69.0 | 11281 | 0.7795 | 0.7367 |
| 0.5158 | 70.0 | 11445 | 0.7806 | 0.7345 |
| 0.5576 | 71.0 | 11608 | 0.7903 | 0.7345 |
| 0.5394 | 72.0 | 11772 | 0.7812 | 0.7380 |
| 0.5099 | 73.0 | 11935 | 0.7808 | 0.7354 |
| 0.5209 | 74.0 | 12099 | 0.7851 | 0.7319 |
| 0.5322 | 75.0 | 12262 | 0.7908 | 0.7401 |
| 0.5351 | 76.0 | 12426 | 0.7960 | 0.7306 |
| 0.5272 | 77.0 | 12589 | 0.7924 | 0.7324 |
| 0.477 | 78.0 | 12753 | 0.7981 | 0.7332 |
| 0.5186 | 79.0 | 12916 | 0.7942 | 0.7341 |
| 0.5366 | 80.0 | 13080 | 0.8016 | 0.7367 |
| 0.4809 | 81.0 | 13243 | 0.8014 | 0.7341 |
| 0.4889 | 82.0 | 13407 | 0.8008 | 0.7354 |
| 0.5287 | 83.0 | 13570 | 0.8010 | 0.7349 |
| 0.4926 | 84.0 | 13734 | 0.8047 | 0.7371 |
| 0.4989 | 85.0 | 13897 | 0.8046 | 0.7384 |
| 0.5483 | 86.0 | 14061 | 0.8022 | 0.7371 |
| 0.5157 | 87.0 | 14224 | 0.8055 | 0.7358 |
| 0.4999 | 88.0 | 14388 | 0.8071 | 0.7319 |
| 0.519 | 89.0 | 14551 | 0.8083 | 0.7362 |
| 0.4534 | 90.0 | 14715 | 0.8082 | 0.7384 |
| 0.429 | 91.0 | 14878 | 0.8103 | 0.7354 |
| 0.5073 | 92.0 | 15042 | 0.8116 | 0.7336 |
| 0.5358 | 93.0 | 15205 | 0.8106 | 0.7341 |
| 0.5049 | 94.0 | 15369 | 0.8111 | 0.7315 |
| 0.4745 | 95.0 | 15532 | 0.8118 | 0.7336 |
| 0.5052 | 96.0 | 15696 | 0.8104 | 0.7371 |
| 0.495 | 97.0 | 15859 | 0.8101 | 0.7354 |
| 0.4752 | 98.0 | 16023 | 0.8117 | 0.7349 |
| 0.4927 | 99.0 | 16186 | 0.8120 | 0.7336 |
| 0.4875 | 99.69 | 16300 | 0.8122 | 0.7345 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Binarystar105/distilbert-base-uncased-finetuned-cola
|
Binarystar105
| 2024-01-05T16:29:38Z
| 44
| 0
|
transformers
|
[
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-05T16:26:32Z
|
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Binarystar105/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Binarystar105/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1898
- Validation Loss: 0.5491
- Train Matthews Correlation: 0.5347
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5182 | 0.4631 | 0.4729 | 0 |
| 0.3241 | 0.4697 | 0.5227 | 1 |
| 0.1898 | 0.5491 | 0.5347 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
armhebb/lora_license-id_style-name-4
|
armhebb
| 2024-01-05T16:28:12Z
| 4
| 0
|
diffusers
|
[
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"region:us"
] |
text-to-image
| 2024-01-05T14:31:34Z
|
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - armhebb/lora_license-id_style-name-4
These are LoRA adaptation weights for resleeve_base. The weights were fine-tuned on the None dataset. You can find some example images below.

LoRA for the text encoder was enabled: False.
Special VAE used for training: None.
|
la-min/falcon-7b-qlora-health-faq
|
la-min
| 2024-01-05T16:26:26Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-01-05T13:57:35Z
|
---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
yvelos/llama-2-7b-int4-python-code-20k
|
yvelos
| 2024-01-05T16:24:37Z
| 1
| 0
|
peft
|
[
"peft",
"region:us"
] | null | 2024-01-05T16:24:22Z
|
---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training (an equivalent `BitsAndBytesConfig` sketch follows the list):
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
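For reference, here is a minimal sketch of the equivalent `transformers` `BitsAndBytesConfig` (the base model id is an assumption inferred from the adapter name; the adapter itself is attached separately with PEFT):
```python
# Sketch of a BitsAndBytesConfig matching the settings listed above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumed base model for this adapter
    quantization_config=bnb_config,
    device_map="auto",
)
```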
### Framework versions
- PEFT 0.4.0
|
AzureBlack/WinterGoddess-1.4x-70B-L2-3bpw-6h-exl2
|
AzureBlack
| 2024-01-05T16:19:50Z
| 10
| 1
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T16:13:02Z
|
---
license: cc-by-nc-4.0
language:
- en
---
ExllamaV2 version of the model created by [Sao10K](https://huggingface.co/Sao10K)!
Original Model https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2
Requires ExllamaV2, which is being developed by turboderp https://github.com/turboderp/exllamav2 under an MIT license.
-----------------
Winter Goddess - A 70B L2 Model for General use, or for Roleplay.
I wanted a Smart Model that is Capable of following Instructions, while being able to (e)RP effectively. Sort of like 1.3, but better.
I merged some models as a base, and had tuned on top of it afterwards.
I personally think this mogs Euryale 1.3, but ymmv.
***
For Transparency's Sake:
Models Used:
<br> Platypus2-70B-instruct
<br> Lila-70B
<br> SunsetBoulevard (at roughly 0.1 weight, boosting coherency)
<br> Private De-alignment LoRA on top.
Why does it show mergekit in the safetensors.index metadata? -> I used the DARE method to merge the 3 models, then trained an Axolotl qLoRA on top, then used lora-merge, copying the files of the base merged model over because they didn't save to the new one; only the .safetensors files got saved.
***
Prompt Format - Alpaca
```
### Instruction:
<Prompt>
### Response:
```
OR
```
### Instruction:
<Prompt>
### Input:
<Insert Context Here>
### Response:
```
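As an illustrative helper (not from the original card), both variants can be produced with one small function that adds the Input block only when context is supplied:
```python
# Illustrative Alpaca prompt builder covering both template variants above.
def build_alpaca_prompt(instruction, context=None):
    if context:
        return (f"### Instruction:\n{instruction}\n\n"
                f"### Input:\n{context}\n\n"
                f"### Response:\n")
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

print(build_alpaca_prompt("Summarize the scene.", "A snowy mountain pass at dusk."))
```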
***
<br> 42. A 25-year-old female has been struck in the right eye with a pipe. She has a ruptured right globe, an orbital fracture and no other obvious injury. You should bandage:
<br> A) The right eye tightly
<br> B) Both eyes loosely
<br> C) The right eye loosely
<br> D) Both eyes tightly
|
mus-shd/Reinforce-CartPole-v1
|
mus-shd
| 2024-01-05T16:18:55Z
| 0
| 0
| null |
[
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T16:18:38Z
|
---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DhruvLunawath/Qrcode_generator
|
DhruvLunawath
| 2024-01-05T16:15:41Z
| 0
| 0
| null |
[
"Qr code ",
"text-to-image",
"en",
"region:us"
] |
text-to-image
| 2024-01-05T16:09:16Z
|
---
language:
- en
pipeline_tag: text-to-image
tags:
- 'Qr code '
---
|
nlee282/moai-dpo-1.0
|
nlee282
| 2024-01-05T15:59:05Z
| 1
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"dataset:unalignment/toxic-dpo-v0.1",
"base_model:unsloth/zephyr-sft-bnb-4bit",
"base_model:adapter:unsloth/zephyr-sft-bnb-4bit",
"region:us"
] | null | 2024-01-05T02:01:53Z
|
---
datasets:
- unalignment/toxic-dpo-v0.1
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: unsloth/zephyr-sft-bnb-4bit
model-index:
- name: outputs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
🗿
# outputs
This model is a fine-tuned version of [unsloth/zephyr-sft-bnb-4bit](https://huggingface.co/unsloth/zephyr-sft-bnb-4bit) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
krabhi/ppo-SnowballTarget
|
krabhi
| 2024-01-05T15:54:36Z
| 0
| 0
|
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2024-01-05T14:48:08Z
|
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: krabhi/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
anjith672/snsStyle
|
anjith672
| 2024-01-05T15:53:10Z
| 1
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] |
text-to-image
| 2024-01-05T14:23:52Z
|
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: sns style
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
kieranbm/dqn-SpaceInvadersNoFrameskip-v4
|
kieranbm
| 2024-01-05T15:49:48Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T15:49:15Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 574.00 +/- 117.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kieranbm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga kieranbm -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga kieranbm
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
oshizo/japanese-e5-mistral-7b_slerp
|
oshizo
| 2024-01-05T15:48:24Z
| 70
| 7
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"feature-extraction",
"sentence-similarity",
"ja",
"license:mit",
"text-generation-inference",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-01-04T12:33:19Z
|
---
license: mit
language:
- ja
pipeline_tag: sentence-similarity
---
This model was created by merging [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) and [stabilityai/japanese-stablelm-base-gamma-7b](https://huggingface.co/stabilityai/japanese-stablelm-base-gamma-7b).
See [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct) page or [evaluation notebook of oshizo/JapaneseEmbeddingEval](https://github.com/oshizo/JapaneseEmbeddingEval/blob/main/21_oshizo_japanese-e5-mistral-7b_slerp.ipynb) for model usage.
The steps to merge are as follows.
1. Load intfloat/e5-mistral-7b-instruct as a "MistralForCausalLM" class and save_pretrained as is.
Because e5-mistral-7b-instruct is published with the "MistralModel" class, it could not be merged as "MistralForCausalLM" as is.
In my environment, I had to load the model on the CPU, not the GPU, or I would get an error.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "intfloat/e5-mistral-7b-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)#, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)
model.save_pretrained("./e5-mistral-7b-instruct_with_lm_head")
```
2. Merge using [mergekit](https://github.com/cg123/mergekit) with the following yaml configuration
merge_config.yaml
```
models:
- model: stabilityai/japanese-stablelm-base-gamma-7b
- model: ./e5-mistral-7b-instruct_with_lm_head
base_model: stabilityai/japanese-stablelm-base-gamma-7b
parameters:
t:
- value: [0.5, 0.9]
merge_method: slerp
dtype: float16
```
I tried the "linear", "slerp", and "task_arithmetic" merging methods, and this setting seemed to be the best.
The choice of "t" parameters was set to use more japanese-stablelm-base-gamma-7b for the layer closer to the input to take advantage of Japanese word understanding,
and more e5-mistral-7b-instruct for the layer closer to the output to generate good embeddings.
As for the "ties" method, I could not find any parameters for density and weight that worked properly.
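To make the effect of that `t` schedule concrete, here is a small sketch of the per-layer weights, assuming mergekit interpolates the listed values linearly across the model's layers (32 layers is the Mistral-7B default; this is an illustration, not mergekit's actual code):
```python
# Sketch: per-layer slerp weights for t = [0.5, 0.9], assuming linear interpolation.
num_layers = 32                # Mistral-7B has 32 transformer layers
t_start, t_end = 0.5, 0.9      # values from the merge_config.yaml above
t_per_layer = [
    t_start + (t_end - t_start) * i / (num_layers - 1)
    for i in range(num_layers)
]
# Per the description above: layers near the input lean more on
# japanese-stablelm-base-gamma-7b, layers near the output more on e5-mistral-7b-instruct.
print([round(t, 3) for t in t_per_layer])
```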
3. Copy settings related to pad_token from the e5-mistral-7b-instruct repository (a minimal copy sketch follows the file list).
* config.json
* tokenizer.json
* tokenizer.model
* tokenizer_config.json
* special_tokens_map.json
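A minimal sketch of that copy step (the merged-model directory name is illustrative; in practice you may prefer to merge only the pad_token fields into config.json rather than overwrite the whole file):
```python
# Sketch: fetch the pad_token-related files from e5-mistral-7b-instruct and
# place them in the merged model directory.
import shutil
from huggingface_hub import hf_hub_download

merged_dir = "./japanese-e5-mistral-7b_slerp"  # illustrative output path
files = [
    "config.json",
    "tokenizer.json",
    "tokenizer.model",
    "tokenizer_config.json",
    "special_tokens_map.json",
]
for name in files:
    local_path = hf_hub_download("intfloat/e5-mistral-7b-instruct", filename=name)
    shutil.copy(local_path, f"{merged_dir}/{name}")
```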
|
chamdentimem/pegasus-samsum
|
chamdentimem
| 2024-01-05T15:46:03Z
| 6
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"base_model:google/pegasus-cnn_dailymail",
"base_model:finetune:google/pegasus-cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T15:44:17Z
|
---
base_model: google/pegasus-cnn_dailymail
tags:
- generated_from_trainer
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6599 | 0.54 | 500 | 1.4833 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/Norobara-ZLoss-8x7B-5.0bpw-h6-exl2
|
LoneStriker
| 2024-01-05T15:35:27Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-05T15:20:44Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
datasets:
- LDJnr/Capybara
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# Norobara-ZLoss-8x7B
This is an experimental instruct-tuned [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)-based model trained using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
It primarily uses the Capybara and No Robots datasets (thus the name). The goal was to create an uncensored general instruction following model, as well as test various loss implementations while we figure out how the heck to train Mixtral properly.
[Exl2 Quants](https://huggingface.co/royallab/Norobara-ZLoss-8x7B-exl2)
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
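As an illustrative helper (not part of the original card), a multi-turn prompt in this format can be assembled from a list of turns:
```python
# Illustrative builder for the modified multi-turn Alpaca format above.
def build_prompt(system_prompt, turns):
    """turns: list of (user_message, assistant_reply) pairs; use None as the
    last reply to leave the final response open for the model."""
    text = f"### Instruction:\n{system_prompt}\n\n"
    for user_msg, assistant_msg in turns:
        text += f"### Input:\n{user_msg}\n\n### Response:\n"
        if assistant_msg is not None:
            text += f"{assistant_msg}\n\n"
    return text

print(build_prompt(
    "You are a concise assistant.",
    [("Name three facts about llamas.", None)],
))
```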
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a QLora adapter for 3 epochs using a single H100 GPU for around 13 hours.
|
quinten-datalab/AliBERT-7GB
|
quinten-datalab
| 2024-01-05T15:33:56Z
| 33
| 3
|
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Biomedical",
"Medical",
"French-Biomedical",
"fr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-04T14:54:08Z
|
---
license: mit
language:
- fr
library_name: transformers
tags:
- Biomedical
- Medical
- French-Biomedical
Mask token:
- [MASK]
widget:
- text: "A l’admission, l’examen clinique mettait en évidence : - une hypotension artérielle avec une pression [MASK] à 6 mmHg."
example_title: "Example 1"
- text: "Le patient a été diagnostiqué avec une [MASK] lobaire aiguë et a été traité avec des antibiotiques appropriés"
example_title: "Example 2"
- text: "En mars 2001, le malade fut opéré, mais vu le caractère hémorragique de la tumeur, une simple biopsie surrénalienne a été réalisée ayant montré l’aspect de [MASK] malin non Hodgkinien de haut grade de malignité."
example_title: "Example 3"
- text: "La cytologie urinaire n’a mis en évidence que des cellules [MASK] normales et l’examen cyto-bactériologique des urines était stérile."
example_title: "Example 4"
- text: "La prise de greffe a été systématiquement réalisée au niveau de la face interne de la [MASK] afin de limiter la plaie cicatricielle."
example_title: "Example 5"
---
# quinten-datalab/AliBERT-7GB: AliBERT is a pre-trained language model for French biomedical text.
# Introduction
AliBERT is a pre-trained language model for French biomedical text. It is trained with a masked language modelling objective, like RoBERTa.
Here are the main contributions of our work:
<ul>
<li>
A French biomedical language model, a language-specific and domain-specific PLM, which can be used to represent French biomedical text for different downstream tasks.
</li>
<li>
A normalized Unigram sub-word tokenization of French biomedical text input, which improves our vocabulary and the overall performance of the trained models.
</li>
<li>
It is a foundation model that achieved state-of-the-art results on French biomedical text.
</li>
</ul>
The Paper can be found here: https://aclanthology.org/2023.bionlp-1.19/
# Data
The pre-training corpus was gathered from different sub-corpora. It is composed of 7GB French biomedical textual documents. The corpora were collected from different sources. Scientific articles are collected from ScienceDirect using an API provided on subscription and where French articles in biomedical domain were selected. The summaries of thesis manuscripts are collected from "Système universitaire de documentation (SuDoc)" which is a catalog of universities documentation system. Short texts and some complete sentences were collected from the public drug database which lists the characteristics of tens of thousands of drugs. Furthermore, a similar drug database known as "Résumé des Caractéristiques du Produit (RCP)" is also used to represent a description of medications that are intended to be utilized by biomedicine professionals.
# How to use quinten-datalab/AliBERT-7GB with HuggingFace
Load quinten-datalab/AliBERT-7GB fill-mask model and the tokenizer used to train AliBERT:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("quinten-datalab/AliBERT-7GB")
model = AutoModelForMaskedLM.from_pretrained("quinten-datalab/AliBERT-7GB")

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
nlp_AliBERT = fill_mask("La prise de greffe a été systématiquement réalisée au niveau de la face interne de la [MASK] afin de limiter la plaie cicatricielle.")
[{'score': 0.7724128365516663,
'token': 6749,
'token_str': 'cuisse',
'sequence': 'La prise de greffe a été systématiquement réalisée au niveau de la face interne de la cuisse afin de limiter la plaie cicatricielle.'},
{'score': 0.09472355246543884,
'token': 4915,
'token_str': 'jambe',
'sequence': 'La prise de greffe a été systématiquement réalisée au niveau de la face interne de la jambe afin de limiter la plaie cicatricielle.'},
{'score': 0.03340734913945198,
'token': 2050,
'token_str': 'main',
'sequence': 'La prise de greffe a été systématiquement réalisée au niveau de la face interne de la main afin de limiter la plaie cicatricielle.'},
{'score': 0.030924487859010696,
'token': 844,
'token_str': 'face',
'sequence': 'La prise de greffe a été systématiquement réalisée au niveau de la face interne de la face afin de limiter la plaie cicatricielle.'},
{'score': 0.012518334202468395,
'token': 3448,
'token_str': 'joue',
'sequence': 'La prise de greffe a été systématiquement réalisée au niveau de la face interne de la joue afin de limiter la plaie cicatricielle.'}]
```
# Metrics and results
The model has been evaluated on the following downstream tasks:
## Biomedical Named Entity Recognition (NER)
The model is evaluated on two publicly available French biomedical corpora (CAS and QUAERO).
#### CAS dataset
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg .tg-baqh{text-align:center;vertical-align:top}
.tg .tg-0lax{text-align:center;vertical-align:top}
</style>
<table class="tg">
<thead>
<tr>
<th>Models</th>
<th class="tg-0lax" colspan="3">CamemBERT</th>
<th class="tg-0lax" colspan="3">AliBERT</th>
<th class="tg-0lax" colspan="3">DrBERT</th>
</tr>
</thead>
<tbody>
<tr>
<td>Entities</td><td>P<br></td><td>R</td><td>F1</td><td>P<br></td><td>R</td><td>F1</td><td>P<br></td><td>R</td><td>F1</td>
</tr>
<tr>
<td>Substance</td><td>0.96</td><td>0.87</td><td>0.91</td><td>0.96</td><td>0.91</td><td>0.93</td><td>0.83</td><td>0.83</td><td>0.82</td>
</tr>
<tr>
<td>Symptom</td> <td>0.89</td> <td>0.91</td> <td>0.90</td> <td>0.96</td> <td>0.98</td> <td>0.97</td> <td>0.93</td> <td>0.90</td> <td>0.91</td>
</tr>
<tr>
<td>Anatomy</td> <td>0.94</td> <td>0.91</td> <td>0.88</td> <td>0.97</td> <td>0.97</td> <td>0.98</td> <td>0.92</td> <td>0.93</td> <td>0.93</td>
</tr>
<tr>
<td>Value</td> <td>0.88</td> <td>0.46</td> <td>0.60</td> <td>0.98</td> <td>0.99</td> <td>0.98</td> <td>0.91</td> <td>0.91</td> <td>0.91</td>
</tr>
<tr>
<td> Pathology</td> <td>0.79</td> <td>0.70</td> <td>0.74</td> <td>0.81</td> <td>0.39</td> <td>0.52</td> <td>0.85</td> <td>0.57</td> <td>0.68</td>
</tr>
<tr>
<td>Macro Avg</td> <td>0.89 </td> <td>0.79</td> <td>0.81</td> <td> 0.94</td> <td>0.85</td> <td>0.88</td> <td> 0.92</td> <td> 0.87</td> <td>0.89</td>
</tr>
</tbody>
</table>
Table 1: NER performances on CAS dataset
#### QUAERO dataset
<table class="tg">
<thead>
<tr>
<th>Models</th>
<th class="tg-0lax" colspan="3">CamemBERT</th>
<th class="tg-0lax" colspan="3">AliBERT</th>
<th class="tg-0lax" colspan="3">DrBERT</th>
</tr>
</thead>
<tbody>
<tr>
<td>Entity </td> <td> P </td> <td> R </td> <td> F1 </td> <td> P </td> <td> R </td> <td> F1 </td> <td> P </td> <td> R </td> <td> F1 </td>
</tr>
<tr>
<td>Anatomy </td> <td> 0.649 </td> <td> 0.641 </td> <td> 0.645 </td> <td> 0.795 </td> <td> 0.811 </td> <td> 0.803 </td> <td> 0.736 </td> <td> 0.844 </td> <td> 0.824 </td>
</tr>
<tr>
<td>Chemical </td> <td> 0.844 </td> <td> 0.847 </td> <td> 0.846 </td> <td> 0.878 </td> <td> 0.893 </td> <td> 0.885 </td> <td> 0.505 </td> <td> 0.823 </td> <td> 0.777 </td>
</tr>
<tr>
<td>Device </td> <td> 0.000 </td> <td> 0.000 </td> <td> 0.000 </td> <td> 0.506 </td> <td> 0.356 </td> <td> 0.418 </td> <td> 0.939 </td> <td> 0.237 </td> <td> 0.419 </td>
</tr>
<tr>
<td>Disorder </td> <td> 0.772 </td> <td> 0.818 </td> <td> 0.794 </td> <td> 0.857 </td> <td> 0.843 </td> <td> 0.850 </td> <td> 0.883 </td> <td> 0.809 </td> <td> 0.845 </td>
</tr>
<tr>
<td>Procedure </td> <td> 0.880 </td> <td> 0.894 </td> <td> 0.887 </td> <td> 0.969 </td> <td> 0.967 </td> <td> 0.968 </td> <td> 0.944 </td> <td> 0.976 </td> <td> 0.960 </td>
</tr>
<tr>
<td>Macro Avg </td> <td> 0.655 </td> <td> 0.656 </td> <td> 0.655 </td> <td> 0.807 </td> <td> 0.783 </td> <td> 0.793 </td> <td> 0.818 </td> <td> 0.755 </td> <td> 0.782 </td>
</tr>
</tbody>
</table>
Table 2: NER performances on QUAERO dataset
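The NER scores above come from fine-tuning each encoder on the annotated corpora. As a rough illustration (not the exact training setup used in the paper), AliBERT can be loaded as a token-classification backbone like this:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Illustrative label set; the CAS experiments use entity types such as
# Substance, Symptom, Anatomy, Value and Pathology (plus the "O" tag).
labels = ["O", "Substance", "Symptom", "Anatomy", "Value", "Pathology"]

tokenizer = AutoTokenizer.from_pretrained("quinten-datalab/AliBERT-7GB")
model = AutoModelForTokenClassification.from_pretrained(
    "quinten-datalab/AliBERT-7GB",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={label: i for i, label in enumerate(labels)},
)
# The model can then be fine-tuned on CAS or QUAERO with the usual
# transformers token-classification recipe (Trainer, DataCollatorForTokenClassification, etc.).
```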
## AliBERT: A Pre-trained Language Model for French Biomedical Text
|
Vigneshwari-Sambandan/vit-base-patch16-224-finetuned-fibre
|
Vigneshwari-Sambandan
| 2024-01-05T15:29:55Z
| 7
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224",
"base_model:finetune:google/vit-base-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-01-04T09:10:37Z
|
---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-fibre
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5179971204607263
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-finetuned-fibre
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5532
- Accuracy: 0.5180
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
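As a rough guide to reproducing a comparable run, the settings listed above map onto `transformers.TrainingArguments` roughly as in the following sketch (an approximation of the auto-generated configuration, not the exact training script):
```python
from transformers import TrainingArguments

# Approximate mapping of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-finetuned-fibre",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # effective train batch size 128
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```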
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.6045 | 1.0 | 879 | 1.6613 | 0.4918 |
| 1.5847 | 2.0 | 1758 | 1.5962 | 0.5065 |
| 1.4774 | 3.0 | 2637 | 1.5532 | 0.5180 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LoneStriker/sqlcoder-34b-alpha-8.0bpw-h8-exl2
|
LoneStriker
| 2024-01-05T15:27:20Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T15:09:21Z
|
---
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
---
# Defog SQLCoder
**Updated on Nov 14 to reflect benchmarks for SQLCoder-34B**
Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries.
[Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🤗 HF Repo](https://huggingface.co/defog/sqlcoder-34b-alpha) | [♾️ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐦 Twitter](https://twitter.com/defogdata)
## TL;DR
SQLCoder-34B is a 34B parameter model that outperforms `gpt-4` and `gpt-4-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models.
SQLCoder-34B is fine-tuned on a base CodeLlama model.
## Results on novel datasets not seen in training
| model | perc_correct |
|-|-|
| defog-sqlcoder-34b | 84.0 |
| gpt4-turbo-2023-11-09 | 82.5 |
| gpt4-2023-11-09 | 82.5 |
| defog-sqlcoder2 | 77.5 |
| gpt4-2023-08-28 | 74.0 |
| defog-sqlcoder-7b | 71.0 |
| gpt-3.5-2023-10-04 | 66.0 |
| claude-2 | 64.5 |
| gpt-3.5-2023-08-28 | 61.0 |
| claude_instant_1 | 61.0 |
| text-davinci-003 | 52.5 |

## License
The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms.
## Training
Defog was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework.
You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/).
## Results by question category
We classified each generated question into one of 5 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-34b | 80 | 94.3 | 88.6 | 74.3 | 82.9 | 82.9 |
| gpt-4 | 68 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| sqlcoder2-15b | 76 | 80 | 77.1 | 60 | 77.1 | 77.1 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 68 | 77.1 | 68.6 | 37.1 | 71.4 | 74.3 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |
| claude-instant | 48 | 71.4 | 74.3 | 45.7 | 62.9 | 60 |
| gpt-3 | 32 | 71.4 | 68.6 | 25.7 | 57.1 | 54.3 |
<img width="831" alt="image" src="https://github.com/defog-ai/sqlcoder/assets/5008293/79c5bdc8-373c-4abd-822e-e2c2569ed353">
## Using SQLCoder
You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](./inference.py) on a [sample database schema](./metadata.sql).
```bash
python inference.py -q "Question about the sample database goes here"
# Sample question:
# Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two.
```
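If you prefer to call the model directly rather than through `inference.py`, a minimal sketch with `transformers` looks roughly like this. The prompt layout and table schema below are illustrative placeholders, not the official prompt or the schema from `metadata.sql`:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "defog/sqlcoder-34b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt: give the model the schema and the question,
# then let it complete the SQL query.
prompt = """### Task
Generate a SQL query to answer the following question:
How many customers are there in each city?

### Database Schema
CREATE TABLE customers (id INT, name TEXT, city TEXT);

### SQL
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```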
You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo)
## Hardware Requirements
SQLCoder-34B has been tested on a 4xA10 GPU with `float16` weights. You can also load an 8-bit or 4-bit quantized version of the model on consumer hardware with 20GB or more of memory – for example an RTX 4090, an RTX 3090, or an Apple M2 Pro, M2 Max, or M2 Ultra chip.
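A minimal sketch of the quantized loading path mentioned above, assuming `bitsandbytes` is installed on a CUDA machine (the parameter choices are illustrative):
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
model = AutoModelForCausalLM.from_pretrained(
    "defog/sqlcoder-34b-alpha",
    quantization_config=quant_config,
    device_map="auto",
)
```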
## Todo
- [x] Open-source the v1 model weights
- [x] Train the model on more data, with higher data variance
- [ ] Tune the model further with Reward Modelling and RLHF
- [ ] Pretrain a model from scratch that specializes in SQL analysis
|
Abhinav28/whisper-large-v3-hindi-100steps
|
Abhinav28
| 2024-01-05T15:26:19Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai/whisper-large-v3",
"base_model:adapter:openai/whisper-large-v3",
"region:us"
] | null | 2024-01-03T18:55:08Z
|
---
library_name: peft
base_model: openai/whisper-large-v3
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
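No snippet is provided in this card; a minimal sketch, assuming this repository holds a PEFT adapter trained on top of `openai/whisper-large-v3`, might look like this:
```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large-v3")
model = PeftModel.from_pretrained(base_model, "Abhinav28/whisper-large-v3-hindi-100steps")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v3")
# `model` can now be used like a regular Whisper model for transcription.
```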
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
SaiedAlshahrani/arywiki_20230101_roberta_mlm_nobots
|
SaiedAlshahrani
| 2024-01-05T15:20:20Z
| 20
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"ar",
"dataset:SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_nobots",
"dataset:SaiedAlshahrani/MASD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-26T19:10:36Z
|
---
tags:
- generated_from_trainer
model-index:
- name: aryRoBERTa
results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_nobots
- SaiedAlshahrani/MASD
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو <mask>
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Moroccan Arabic Wikipedia (aryRoBERTa<sub>BASE</sub>)
This aryRoBERTa<sub>BASE</sub> model has been trained *from scratch* on the Moroccan Arabic Wikipedia articles (**after removing the bot-generated articles**), downloaded on the 1st of January 2023, processed using
`Gensim` Python library, preprocessed using `tr` Linux/Unix utility and `CAMeLTools` Python toolkit for Arabic NLP, and hosted here at [SaiedAlshahrani/Moroccan\_Arabic\_Wikipedia\_20230101\_nobots](https://huggingface.co/datasets/SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_nobots).
It achieves the following results on the evaluation set:
- Pseudo-Perplexity: 5,686.44
## Model description
We trained this Moroccan Arabic Wikipedia Masked Language Model (aryRoBERTa<sub>BASE</sub>) to evaluate its performance using the Fill-Mask evaluation task and the Masked Arab States Dataset ([MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)), and to measure the *impact* of **bot-based generation** on the Moroccan Arabic Wikipedia edition.
For more details about the experiment, please **read** and **cite** our paper:
```bash
@inproceedings{alshahrani-etal-2023-performance,
title = "{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}",
author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
booktitle = "Proceedings of the First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.arabicnlp-1.19",
doi = "10.18653/v1/2023.arabicnlp-1.19",
pages = "218--231",
abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
```
## Intended uses & limitations
We do **not** recommend using this model because it was trained *only* on the Moroccan Arabic Wikipedia articles (**after removing the bot-generated articles**), <u>unless</u> you fine-tune the model on a large, organic, and representative Moroccan Arabic dataset.
## Training and evaluation data
We have trained this model on the Moroccan Arabic Wikipedia articles without bot-generated articles ([SaiedAlshahrani/Moroccan\_Arabic\_Wikipedia\_20230101\_nobots](https://huggingface.co/datasets/SaiedAlshahrani/Moroccan_Arabic_Wikipedia_20230101_nobots)) without using any validation or evaluation data (only training data) due to a lack of computational power.
## Training procedure
We have trained this model using the Paperspace GPU-Cloud service. We used a machine with 8 CPUs, 45GB RAM, and A6000 GPU with 48GB RAM.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 128
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1 | 35 | 9.561500 |
| 2 | 70 | 7.946000 |
| 3 | 105 | 7.420400 |
| 4 | 140 | 7.197800 |
| 5 | 175 | 7.174400 |
| Train Runtime | Train Samples Per Second | Train Steps Per Second | Total Flos | Train Loss | Epoch |
|:--------------:|:------------------------:|:----------------------:|:-------------------------:|:----------:|:--------:|
| 192.684800 | 121.260000 | 0.960000 | 774708261150720.000000 | 7.812142 | 5.000000 |
### Evaluation results
This aryRoBERTa<sub>BASE</sub> model has been evaluated on the Masked Arab States Dataset ([SaiedAlshahrani/MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)).
| K=10 | K=50 | K=100 |
|:----:|:-----:|:----:|
| 0.00%| 0.00% | 0.62% |
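The K values above refer to top-K fill-mask predictions on MASD. As a quick illustration of how a single top-K query runs with this model (the input below is the widget example from this card, not an item from MASD):
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="SaiedAlshahrani/arywiki_20230101_roberta_mlm_nobots",
    top_k=10,
)
predictions = fill_mask("الهدف من الحياة هو <mask>")
print([p["token_str"] for p in predictions])
```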
### Framework versions
- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
|
SaiedAlshahrani/arwiki_20230101_roberta_mlm_nobots
|
SaiedAlshahrani
| 2024-01-05T15:19:23Z
| 8
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"ar",
"dataset:SaiedAlshahrani/Arabic_Wikipedia_20230101_nobots",
"dataset:SaiedAlshahrani/MASD",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2023-08-27T09:32:55Z
|
---
tags:
- generated_from_trainer
model-index:
- name: arRoBERTa
results: []
metrics:
- perplexity
license: mit
datasets:
- SaiedAlshahrani/Arabic_Wikipedia_20230101_nobots
- SaiedAlshahrani/MASD
language:
- ar
library_name: transformers
pipeline_tag: fill-mask
widget:
- text: الهدف من الحياة هو <mask>
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Arabic Wikipedia (arRoBERTa<sub>BASE</sub>)
This arRoBERTa<sub>BASE</sub> model has been trained *from scratch* on the Arabic Wikipedia articles (**after removing the bot-generated articles**), downloaded on the 1st of January 2023, processed using
`Gensim` Python library, preprocessed using `tr` Linux/Unix utility and `CAMeLTools` Python toolkit for Arabic NLP, and hosted here at [SaiedAlshahrani/Arabic\_Wikipedia\_20230101\_nobots](https://huggingface.co/datasets/SaiedAlshahrani/Arabic_Wikipedia_20230101_nobots).
It achieves the following results on the evaluation set:
- Pseudo-Perplexity: 20.41
## Model description
We trained this Arabic Wikipedia Masked Language Model (arRoBERTa<sub>BASE</sub>) to evaluate its performance using the Fill-Mask evaluation task and the Masked Arab States Dataset ([MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)), and to measure the *impact* of **bot-based generation** on the Arabic Wikipedia edition.
For more details about the experiment, please **read** and **cite** our paper:
```bash
@inproceedings{alshahrani-etal-2023-performance,
title = "{Performance Implications of Using Unrepresentative Corpora in {A}rabic Natural Language Processing}",
author = "Alshahrani, Saied and Alshahrani, Norah and Dey, Soumyabrata and Matthews, Jeanna",
booktitle = "Proceedings of the First Arabic Natural Language Processing Conference (ArabicNLP 2023)",
month = dec,
year = "2023",
address = "Singapore (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.arabicnlp-1.19",
doi = "10.18653/v1/2023.arabicnlp-1.19",
pages = "218--231",
abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
}
```
## Intended uses & limitations
We do **not** recommend using this model because it was trained *only* on the Arabic Wikipedia articles (**after removing the bot-generated articles**), <u>unless</u> you fine-tune the model on a large, organic, and representative Arabic dataset.
## Training and evaluation data
We have trained this model on the Arabic Wikipedia articles without bot-generated articles ([SaiedAlshahrani/Arabic\_Wikipedia\_20230101\_nobots](https://huggingface.co/datasets/SaiedAlshahrani/Arabic_Wikipedia_20230101_nobots)) without using any validation or evaluation data (only training data) due to a lack of computational power.
## Training procedure
We have trained this model using the Paperspace GPU-Cloud service. We used a machine with 8 CPUs, 45GB RAM, and A6000 GPU with 48GB RAM.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 256
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-06
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Epoch | Step | Training Loss |
|:-----:|:-----:|:-------------:|
| 1 | 3000 | 5.681200 |
| 2 | 6000 | 3.777100 |
| 3 | 9000 | 3.246300 |
| 4 | 12000 | 3.012100 |
| 5 | 15000 | 2.888400 |
| Train Runtime | Train Samples Per Second | Train Steps Per Second | Total Flos | Train Loss | Epoch |
|:--------------:|:------------------------:|:----------------------:|:-------------------------:|:----------:|:--------:|
| 17048.756800 | 248.355000 | 0.970000 | 140390797515571200.000000 | 3.639375 | 5.000000 |
### Evaluation results
This arRoBERTa<sub>BASE</sub> model has been evaluated on the Masked Arab States Dataset ([SaiedAlshahrani/MASD](https://huggingface.co/datasets/SaiedAlshahrani/MASD)).
| K=10 | K=50 | K=100 |
|:----:|:-----:|:----:|
| 45.62%| 51.25% | 53.12% |
### Framework versions
- Datasets 2.9.0
- Tokenizers 0.12.1
- Transformers 4.24.0
- Pytorch 1.12.1+cu116
|
surprisedPikachu007/tomato-disease-detection
|
surprisedPikachu007
| 2024-01-05T15:14:05Z
| 35
| 0
|
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2023-03-09T04:55:35Z
|
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
base_model: google/vit-base-patch16-224-in21k
model-index:
- name: tomato-disease-detection
results:
- task:
type: image-classification
name: Image Classification
dataset:
name: imagefolder
type: imagefolder
config: dataset
split: train
args: dataset
metrics:
- type: accuracy
value: 0.9917706397663923
name: Accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tomato-disease-detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0394
- Accuracy: 0.9918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1363 | 1.0 | 941 | 0.1109 | 0.9774 |
| 0.0657 | 2.0 | 1882 | 0.0666 | 0.9841 |
| 0.0605 | 3.0 | 2823 | 0.0394 | 0.9918 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
TheBloke/Mixtral_11Bx2_MoE_19B-AWQ
|
TheBloke
| 2024-01-05T15:11:40Z
| 16
| 5
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"base_model:cloudyu/Mixtral_11Bx2_MoE_19B",
"base_model:quantized:cloudyu/Mixtral_11Bx2_MoE_19B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-05T14:47:38Z
|
---
base_model: cloudyu/Mixtral_11Bx2_MoE_19B
inference: false
license: cc-by-nc-4.0
model_creator: hai
model_name: Mixtral 11Bx2 MoE 19B
model_type: mixtral
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Mixtral 11Bx2 MoE 19B - AWQ
- Model creator: [hai](https://huggingface.co/cloudyu)
- Original model: [Mixtral 11Bx2 MoE 19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
<!-- description start -->
## Description
This repo contains AWQ model files for [hai's Mixtral 11Bx2 MoE 19B](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-GGUF)
* [hai's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/cloudyu/Mixtral_11Bx2_MoE_19B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Mixtral_11Bx2_MoE_19B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 10.36 GB
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Mixtral_11Bx2_MoE_19B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Mixtral_11Bx2_MoE_19B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Mixtral_11Bx2_MoE_19B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
# Plain template string (no f-prefix), so {prompt} is filled in by .format() below
prompt_template='''{prompt}
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Mixtral_11Bx2_MoE_19B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Mixtral_11Bx2_MoE_19B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Mixtral_11Bx2_MoE_19B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''{prompt}
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: hai's Mixtral 11Bx2 MoE 19B
# Mixtral MOE 2x10.7B
MoE of the following models :
* [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct)
* [jeonsworld/CarbonVillain-en-10.7B-v1](https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v1)
* Local Test
* hf (pretrained=cloudyu/Mixtral_11Bx2_MoE_19B), gen_kwargs: (None), limit: None, num_fewshot: 10, batch_size: auto (32)
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|---------|-------|------|-----:|--------|-----:|---|-----:|
|hellaswag|Yaml |none | 10|acc |0.7142|± |0.0045|
| | |none | 10|acc_norm|0.8819|± |0.0032|
GPU code example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='auto',local_files_only=False, load_in_4bit=True
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
CPU example
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import math
## v2 models
model_path = "cloudyu/Mixtral_11Bx2_MoE_19B"
tokenizer = AutoTokenizer.from_pretrained(model_path, use_default_system_prompt=False)
model = AutoModelForCausalLM.from_pretrained(
model_path, torch_dtype=torch.float32, device_map='cpu',local_files_only=False
)
print(model)
prompt = input("please input prompt:")
while len(prompt) > 0:
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
generation_output = model.generate(
input_ids=input_ids, max_new_tokens=500,repetition_penalty=1.2
)
print(tokenizer.decode(generation_output[0]))
prompt = input("please input prompt:")
```
|
Lipas007/bert-finetuned-ner
|
Lipas007
| 2024-01-05T15:11:14Z
| 45
| 0
|
transformers
|
[
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-05T09:48:30Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_keras_callback
model-index:
- name: Lipas007/bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Lipas007/bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0466
- Validation Loss: 0.0566
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2634, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1790 | 0.0647 | 0 |
| 0.0466 | 0.0566 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
torch-uncertainty/resnet18_c10
|
torch-uncertainty
| 2024-01-05T15:09:44Z
| 0
| 0
| null |
[
"vision",
"classification",
"uncertainty",
"dataset:cifar-10",
"license:apache-2.0",
"region:us"
] | null | 2023-12-22T11:27:02Z
|
---
license: apache-2.0
tags:
- vision
- classification
- uncertainty
datasets:
- cifar-10
---
# Standard ResNet trained on CIFAR-10
## How to use
Download [TorchUncertainty](https://torch-uncertainty.github.io/) - [GitHub](https://github.com/ENSTA-U2IS/torch-uncertainty) to use this model.
## License
These weights are provided under the Apache 2.0 license.
|
sourceoftruthdata/sot_autotrain_dreambooth_v1
|
sourceoftruthdata
| 2024-01-05T15:02:38Z
| 9
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:finetune:stabilityai/stable-diffusion-2-1",
"region:us"
] |
text-to-image
| 2023-08-23T23:04:02Z
|
---
base_model: stabilityai/stable-diffusion-2-1
instance_prompt: a hand drawn painting in the style of picasso with geometric shapes
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
This is the model that feeds the Google Colab notebook.
It is a simplified DreamBooth fine-tune: type in a prompt, get an image.
The user is responsible for how this application is used.
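A minimal sketch of generating an image with this checkpoint, assuming it loads through `diffusers`' `DiffusionPipeline` and runs on a CUDA GPU (those details are assumptions, not stated in the card; the prompt is the instance prompt from the metadata above):
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "sourceoftruthdata/sot_autotrain_dreambooth_v1", torch_dtype=torch.float16
).to("cuda")

# Instance prompt taken from the card metadata.
image = pipe("a hand drawn painting in the style of picasso with geometric shapes").images[0]
image.save("sample.png")
```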
|
Annukh/Aristides070
|
Annukh
| 2024-01-05T14:51:42Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-05T14:47:42Z
|
Trigger word : aristides 070 guitar
Examples
purple aristides 070 guitar with a maple neck
green aristides 070 guitar with a black neck
|
LoneStriker/Norobara-ZLoss-8x7B-3.75bpw-h6-exl2
|
LoneStriker
| 2024-01-05T14:50:10Z
| 7
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-05T14:36:18Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
datasets:
- LDJnr/Capybara
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# Norobara-ZLoss-8x7B
This is an experimental instruct-tuned [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)-based model trained using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
It primarily uses the Capybara and No Robots datasets (thus the name). The goal was to create an uncensored general instruction following model, as well as test various loss implementations while we figure out how the heck to train Mixtral properly.
[Exl2 Quants](https://huggingface.co/royallab/Norobara-ZLoss-8x7B-exl2)
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
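A small helper for assembling that format programmatically might look like the following sketch (the function and its structure are illustrative, not part of the original card):
```python
def build_prompt(system_prompt: str, turns: list[tuple[str, str | None]]) -> str:
    """Assemble the modified multi-turn Alpaca format described above.

    Each item in `turns` is a (user_message, model_response) pair; pass None
    as the response of the last turn to leave the prompt open for generation.
    """
    parts = [f"### Instruction:\n{system_prompt}"]
    for user_message, model_response in turns:
        parts.append(f"### Input:\n{user_message}")
        parts.append("### Response:\n" + (model_response if model_response else ""))
    return "\n\n".join(parts)


print(build_prompt("You are a helpful assistant.", [("Hello, who are you?", None)]))
```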
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a QLora adapter for 3 epochs using a single H100 GPU for around 13 hours.
|
marquesafonso/bertimbau-large-ner-selective
|
marquesafonso
| 2024-01-05T14:49:52Z
| 128
| 1
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"pt",
"arxiv:1909.10649",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-01-04T22:10:37Z
|
---
license: mit
language:
- pt
---
# bertimbau-large-ner-selective
This model card aims to simplify the use of the [portuguese Bert, a.k.a, Bertimbau](https://github.com/neuralmind-ai/portuguese-bert) for the Named Entity Recognition task.
For this model card we used the <mark style="background-color: grey"> **BERT-CRF (selective scenario, 5 classes)** </mark> model available in the [ner_evaluation](https://github.com/neuralmind-ai/portuguese-bert/tree/master/ner_evaluation) folder of the original Bertimbau repo.
Available classes are:
+ PESSOA
+ ORGANIZACAO
+ LOCAL
+ TEMPO
+ VALOR
## Usage
```
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("marquesafonso/bertimbau-large-ner-selective")
model = AutoModelForTokenClassification.from_pretrained("marquesafonso/bertimbau-large-ner-selective")
```
## Example
```
from transformers import pipeline
pipe = pipeline("ner", model="marquesafonso/bertimbau-large-ner-selective", aggregation_strategy='simple')
sentence = "Acima de Ederson, abaixo de Rúben Dias. É entre os dois jogadores do Manchester City que se vai colocar Gonçalo Ramos no ranking de vendas mais avultadas do Benfica."
result = pipe([sentence])
print(f"{sentence}\n{result}")
# Acima de Ederson, abaixo de Rúben Dias. É entre os dois jogadores do Manchester City que se vai colocar Gonçalo Ramos no ranking de vendas mais avultadas do Benfica.
# [[
# {'entity_group': 'PESSOA', 'score': 0.99694395, 'word': 'Ederson', 'start': 9, 'end': 16},
# {'entity_group': 'PESSOA', 'score': 0.9918462, 'word': 'Rúben Dias', 'start': 28, 'end': 38},
# {'entity_group': 'ORGANIZACAO', 'score': 0.96376556, 'word': 'Manchester City', 'start': 69, 'end': 84},
# {'entity_group': 'PESSOA', 'score': 0.9993823, 'word': 'Gonçalo Ramos', 'start': 104, 'end': 117},
# {'entity_group': 'ORGANIZACAO', 'score': 0.9033079, 'word': 'Benfica', 'start': 157, 'end': 164}
# ]]
```
## Acknowledgements
This work is an adaptation of [portuguese Bert, a.k.a, Bertimbau](https://github.com/neuralmind-ai/portuguese-bert). You may check and/or cite their [work](http://arxiv.org/abs/1909.10649):
```
@InProceedings{souza2020bertimbau,
author="Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto",
editor="Cerri, Ricardo and Prati, Ronaldo C.",
title="BERTimbau: Pretrained BERT Models for Brazilian Portuguese",
booktitle="Intelligent Systems",
year="2020",
publisher="Springer International Publishing",
address="Cham",
pages="403--417",
isbn="978-3-030-61377-8"
}
@article{souza2019portuguese,
title={Portuguese Named Entity Recognition using BERT-CRF},
author={Souza, F{\'a}bio and Nogueira, Rodrigo and Lotufo, Roberto},
journal={arXiv preprint arXiv:1909.10649},
url={http://arxiv.org/abs/1909.10649},
year={2019}
}
```
Note that the authors - Fabio Capuano de Souza, Rodrigo Nogueira, Roberto de Alencar Lotufo - have used an MIT LICENSE for their work.
|
TheBloke/Kunoichi-7B-AWQ
|
TheBloke
| 2024-01-05T14:36:28Z
| 16
| 8
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:quantized:SanjiWatsuki/Kunoichi-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-05T14:19:31Z
|
---
base_model: SanjiWatsuki/Kunoichi-7B
inference: false
license: cc-by-nc-4.0
model_creator: Sanji Watsuki
model_name: Kunoichi 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kunoichi 7B - AWQ
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [Kunoichi 7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Sanji Watsuki's Kunoichi 7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kunoichi-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kunoichi-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Kunoichi-7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.15 GB
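For reference, a quantisation with these parameters can be reproduced roughly as follows with AutoAWQ. This is an illustrative sketch only: it uses AutoAWQ's default calibration data rather than the VMware Open Instruct set listed above, and is not the exact script used to produce these files.
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "SanjiWatsuki/Kunoichi-7B"
quant_path = "Kunoichi-7B-AWQ"

# 4-bit weights, group size 128, GEMM kernel - matching the parameters in the table above
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

# Load the unquantised model and its tokenizer
model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Run AWQ calibration + quantisation, then save the quantised model
model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```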
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Kunoichi-7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Kunoichi-7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click **Load**, and the model will load and be ready to use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Kunoichi-7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams
prompts = [
"Tell me about AI",
"Write a story about llamas",
"What is 291 - 150?",
"How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]
prompt_template='''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
prompts = [prompt_template.format(prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Kunoichi-7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
prompt = output.prompt
generated_text = output.outputs[0].text
print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Kunoichi-7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(prompt,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1)
print(f"Model output: ", response)
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name_or_path = "TheBloke/Kunoichi-7B-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
model_name_or_path,
low_cpu_mem_usage=True,
device_map="cuda:0"
)
# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'''
# Convert prompt to tokens
tokens = tokenizer(
prompt_template,
return_tensors='pt'
).input_ids.cuda()
generation_params = {
"do_sample": True,
"temperature": 0.7,
"top_p": 0.95,
"top_k": 40,
"max_new_tokens": 512,
"repetition_penalty": 1.1
}
# Generate streamed output, visible one token at a time
generation_output = model.generate(
tokens,
streamer=streamer,
**generation_params
)
# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
tokens,
**generation_params
)
# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)
# Inference is also possible via Transformers' pipeline
from transformers import pipeline
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
**generation_params
)
pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Sanji Watsuki's Kunoichi 7B

<!-- description start -->
## Description
This repository hosts **Kunoichi-7B**, a general-purpose model capable of RP. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous models but gaining more intelligence. Kunoichi scores extremely well on [all benchmarks which correlate closely with ChatBot Arena Elo.](https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/)
| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|----------------------|----------|----------|---------|-------------|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| **Kunoichi-7B** | **8.14** | **44.32** | **~64.7** | **0.58** |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
The model is intended to be used with up to an 8k context window. Using an NTK RoPE alpha of 2.6, the model can be used experimentally up to a 16k context window.
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
### SillyTavern format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else!
## WTF is Kunoichi-7B?
Kunoichi-7B is a SLERP merger between my previous RP model, Silicon-Maid-7B, and an unreleased model that I had dubbed "Ninja-7B". This model is the result of me attempting to merge an RP focused model which maintained the strengths of Silicon-Maid-7B but further increased the model's brain power. I sought to increase both MT-Bench and EQ-Bench without losing Silicon Maid's strong ability to follow SillyTavern character cards.
Ninja-7B was born from an attempt to turn [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2) into a viable model through mergers. Although none of the Ninja prototype models developed to a point where I was happy, it turned out to be a strong model to merge. Combined with Silicon-Maid-7B, this appeared to be a strong merger.
|
veronica-girolimetti/one-shot-colab-originalflant5
|
veronica-girolimetti
| 2024-01-05T14:25:14Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T10:51:00Z
|
---
license: apache-2.0
base_model: google/flan-t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-dialogue-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-dialogue-summarization
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3647
- Rouge1: 43.7125
- Rouge2: 20.8696
- Rougel: 20.4726
- Rougelsum: 20.4726
- Gen Len: 15.005
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 7
- eval_batch_size: 7
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 2.9658 | 1.0 | 186 | 2.6549 | 47.8375 | 19.6224 | 19.3309 | 19.3309 | 15.65 |
| 2.8292 | 2.0 | 372 | 2.5659 | 46.3424 | 19.6335 | 20.017 | 20.017 | 14.965 |
| 2.7598 | 3.0 | 558 | 2.5190 | 45.6451 | 19.7504 | 20.0363 | 20.0363 | 15.07 |
| 2.6531 | 4.0 | 744 | 2.4796 | 45.0646 | 19.649 | 19.6929 | 19.6929 | 14.9 |
| 2.5946 | 5.0 | 930 | 2.4526 | 44.0678 | 19.4902 | 20.0463 | 20.0463 | 15.01 |
| 2.5868 | 6.0 | 1116 | 2.4340 | 44.7027 | 19.7504 | 20.0391 | 20.0391 | 14.765 |
| 2.5896 | 7.0 | 1302 | 2.4179 | 44.5941 | 19.8653 | 20.0073 | 20.0073 | 14.745 |
| 2.5626 | 8.0 | 1488 | 2.3981 | 44.6259 | 19.9902 | 20.3022 | 20.3022 | 15.1 |
| 2.4633 | 9.0 | 1674 | 2.3921 | 44.6047 | 20.4376 | 20.3104 | 20.3104 | 14.97 |
| 2.5217 | 10.0 | 1860 | 2.3826 | 44.2188 | 19.9486 | 20.3353 | 20.3353 | 14.995 |
| 2.48 | 11.0 | 2046 | 2.3766 | 44.4635 | 20.6357 | 20.3618 | 20.3618 | 14.99 |
| 2.4502 | 12.0 | 2232 | 2.3723 | 44.0093 | 20.7614 | 20.3647 | 20.3647 | 14.995 |
| 2.4946 | 13.0 | 2418 | 2.3677 | 43.8165 | 20.947 | 20.4526 | 20.4526 | 15.035 |
| 2.4372 | 14.0 | 2604 | 2.3651 | 44.0221 | 20.9248 | 20.5763 | 20.5763 | 14.92 |
| 2.4606 | 15.0 | 2790 | 2.3647 | 43.7125 | 20.8696 | 20.4726 | 20.4726 | 15.005 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/newt_adaNeo1B_aeslc_1_0_0_sbs0.5_svdemb_sgd_full_ft_coarsegrained
|
ostapeno
| 2024-01-05T14:25:06Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-04T19:52:04Z
|
Number of experts present in the library: 11
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| aeslc_1_0_0 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v6 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v7 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v8 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v9 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
| aeslc_1_0_0_v10 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/aeslc_1_0_0 | lora |
Last updated on: 2024-01-05 14:25:03+00:00
|
geektech/t5-large-lora-ce
|
geektech
| 2024-01-05T14:21:23Z
| 6
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google-t5/t5-large",
"base_model:adapter:google-t5/t5-large",
"region:us"
] | null | 2024-01-05T08:41:12Z
|
---
library_name: peft
base_model: t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
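Until this section is filled in, a minimal sketch for loading the adapter on top of the `t5-large` base model named in the metadata might look like the following. The seq2seq task class and the example prompt are assumptions for illustration only.
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

# Load the t5-large base model and attach the LoRA adapter from this repo
base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")
tokenizer = AutoTokenizer.from_pretrained("t5-large")
model = PeftModel.from_pretrained(base_model, "geektech/t5-large-lora-ce")

# Example prompt is purely illustrative; the intended task is not documented here
inputs = tokenizer("translate English to German: Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```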
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
TheBloke/Kunoichi-7B-GGUF
|
TheBloke
| 2024-01-05T14:19:55Z
| 2,063
| 31
|
transformers
|
[
"transformers",
"gguf",
"mistral",
"merge",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:quantized:SanjiWatsuki/Kunoichi-7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-01-05T11:50:44Z
|
---
base_model: SanjiWatsuki/Kunoichi-7B
inference: false
license: cc-by-nc-4.0
model_creator: Sanji Watsuki
model_name: Kunoichi 7B
model_type: mistral
prompt_template: 'Below is an instruction that describes a task. Write a response
that appropriately completes the request.
### Instruction:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- merge
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Kunoichi 7B - GGUF
- Model creator: [Sanji Watsuki](https://huggingface.co/SanjiWatsuki)
- Original model: [Kunoichi 7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [Sanji Watsuki's Kunoichi 7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Kunoichi-7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Kunoichi-7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF)
* [Sanji Watsuki's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Alpaca
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
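As a rough illustration of how bits-per-weight translates into file size, the snippet below estimates sizes for a ~7.24B-parameter model (an assumed parameter count for this Mistral-7B-based merge). It is a back-of-the-envelope calculation only; real GGUF files also store metadata and keep some tensors at higher precision, so the figures in the Provided Files table differ slightly.
```python
# Back-of-the-envelope file-size estimate from bits-per-weight (bpw)
params = 7.24e9  # assumed parameter count for a Mistral-7B-based model
bpw = {"Q2_K": 2.5625, "Q3_K": 3.4375, "Q4_K": 4.5, "Q5_K": 5.5, "Q6_K": 6.5625}

for name, bits in bpw.items():
    size_gb = params * bits / 8 / 1e9  # bits -> bytes -> GB (decimal)
    print(f"{name}: ~{size_gb:.2f} GB")
```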
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [kunoichi-7b.Q2_K.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [kunoichi-7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.17 GB| 5.67 GB | very small, high quality loss |
| [kunoichi-7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [kunoichi-7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [kunoichi-7b.Q4_0.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [kunoichi-7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [kunoichi-7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [kunoichi-7b.Q5_0.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [kunoichi-7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [kunoichi-7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [kunoichi-7b.Q6_K.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [kunoichi-7b.Q8_0.gguf](https://huggingface.co/TheBloke/Kunoichi-7B-GGUF/blob/main/kunoichi-7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Kunoichi-7B-GGUF and below it, a specific filename to download, such as: kunoichi-7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Kunoichi-7B-GGUF kunoichi-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Kunoichi-7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Kunoichi-7B-GGUF kunoichi-7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m kunoichi-7b.Q4_K_M.gguf --color -c 8192 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 8192` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; eg for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./kunoichi-7b.Q4_K_M.gguf", # Download the model file first
n_ctx=8192, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{prompt}\n\n### Response:", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./kunoichi-7b.Q4_K_M.gguf", chat_format="llama-2") # Set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: Sanji Watsuki's Kunoichi 7B

<!-- description start -->
## Description
This repository hosts **Kunoichi-7B**, a general-purpose model capable of RP. In both my testing and the benchmarks, Kunoichi is an extremely strong model, keeping the advantages of my previous models but gaining more intelligence. Kunoichi scores extremely well on [all benchmarks which correlate closely with ChatBot Arena Elo.](https://www.reddit.com/r/LocalLLaMA/comments/18u0tu3/benchmarking_the_benchmarks_correlation_with/)
| Model | MT Bench | EQ Bench | MMLU | Logic Test |
|----------------------|----------|----------|---------|-------------|
| GPT-4-Turbo | 9.32 | - | - | - |
| GPT-4 | 8.99 | 62.52 | 86.4 | 0.86 |
| **Kunoichi-7B** | **8.14** | **44.32** | **~64.7** | **0.58** |
| Starling-7B | 8.09 | - | 63.9 | 0.51 |
| Claude-2 | 8.06 | 52.14 | 78.5 | - |
| Silicon-Maid-7B | 7.96 | 40.44 | 64.7 | 0.54 |
| Loyal-Macaroni-Maid-7B | 7.95 | 38.66 | 64.9 | 0.57 |
| GPT-3.5-Turbo | 7.94 | 50.28 | 70 | 0.57 |
| Claude-1 | 7.9 | - | 77 | - |
| Openchat-3.5 | 7.81 | 37.08 | 64.3 | 0.39 |
| Dolphin-2.6-DPO | 7.74 | 42.88 | 61.9 | 0.53 |
| Zephyr-7B-beta | 7.34 | 38.71 | 61.4 | 0.30 |
| Llama-2-70b-chat-hf | 6.86 | 51.56 | 63 | - |
| Neural-chat-7b-v3-1 | 6.84 | 43.61 | 62.4 | 0.30 |
The model is intended to be used with up to an 8k context window. Using an NTK RoPE alpha of 2.6, the model can be used experimentally up to a 16k context window.
<!-- description end -->
<!-- prompt-template start -->
## Prompt template: Custom format, or Alpaca
### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{prompt}
### Response:
```
### SillyTavern format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: [Context](https://files.catbox.moe/ifmhai.json), [Instruct](https://files.catbox.moe/ttw1l9.json).
Additionally, here is my highly recommended [Text Completion preset](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/blob/main/Characters/MinP.json). You can tweak this by adjusting temperature up or dropping min p to boost creativity or raise min p to increase stability. You shouldn't need to touch anything else!
## WTF is Kunoichi-7B?
Kunoichi-7B is a SLERP merger between my previous RP model, Silicon-Maid-7B, and an unreleased model that I had dubbed "Ninja-7B". This model is the result of me attempting to merge an RP focused model which maintained the strengths of Silicon-Maid-7B but further increased the model's brain power. I sought to increase both MT-Bench and EQ-Bench without losing Silicon Maid's strong ability to follow SillyTavern character cards.
Ninja-7B was born from an attempt to turn [jan-hq/stealth-v1.2](https://huggingface.co/jan-hq/stealth-v1.2) into a viable model through mergers. Although none of the Ninja prototype models developed to a point where I was happy, it turned out to be a strong model to merge. Combined with Silicon-Maid-7B, this appeared to be a strong merger.
<!-- original-model-card end -->
|
ntc-ai/SDXL-LoRA-slider.symmetrical
|
ntc-ai
| 2024-01-05T14:06:22Z
| 90
| 0
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-05T14:06:18Z
|
---
language:
- en
thumbnail: "images/evaluate/symmetrical.../symmetrical_17_3.0.png"
widget:
- text: symmetrical
output:
url: images/symmetrical_17_3.0.png
- text: symmetrical
output:
url: images/symmetrical_19_3.0.png
- text: symmetrical
output:
url: images/symmetrical_20_3.0.png
- text: symmetrical
output:
url: images/symmetrical_21_3.0.png
- text: symmetrical
output:
url: images/symmetrical_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "symmetrical"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - symmetrical (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/symmetrical_17_-3.0.png" width=256 height=256 /> | <img src="images/symmetrical_17_0.0.png" width=256 height=256 /> | <img src="images/symmetrical_17_3.0.png" width=256 height=256 /> |
| <img src="images/symmetrical_19_-3.0.png" width=256 height=256 /> | <img src="images/symmetrical_19_0.0.png" width=256 height=256 /> | <img src="images/symmetrical_19_3.0.png" width=256 height=256 /> |
| <img src="images/symmetrical_20_-3.0.png" width=256 height=256 /> | <img src="images/symmetrical_20_0.0.png" width=256 height=256 /> | <img src="images/symmetrical_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
symmetrical
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.symmetrical', weight_name='symmetrical.safetensors', adapter_name="symmetrical")
# Activate the LoRA
pipe.set_adapters(["symmetrical"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, symmetrical"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 880 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
madha98/Text_Generation_RNN
|
madha98
| 2024-01-05T13:49:10Z
| 0
| 2
|
keras
|
[
"keras",
"text-generation",
"en",
"dataset:madha98/Shakespeare",
"license:mit",
"region:us"
] |
text-generation
| 2024-01-05T13:39:59Z
|
---
license: mit
datasets:
- madha98/Shakespeare
library_name: keras
pipeline_tag: text-generation
language:
- en
---
# Automatic Text Generation Using SimpleRNN
## Overview
This repository contains code and resources for automatic text generation. The goal is to explore and implement state-of-the-art methods in natural language processing (NLP) to generate coherent and contextually relevant text.
## Introduction
Text generation is a fascinating field within natural language processing that involves creating textual content using machine learning models. This project aims to showcase different techniques and libraries for automatic text generation, providing a starting point for enthusiasts and practitioners interested in this area.
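To make the approach in the title concrete, below is a minimal, hedged sketch of a character-level SimpleRNN generator in Keras. The vocabulary size, sequence length and sampling loop are illustrative assumptions, not the exact configuration of the uploaded model.
```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative hyperparameters - assumptions, not the trained model's actual values
vocab_size, seq_len, embed_dim, rnn_units = 65, 100, 64, 128

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim),
    layers.SimpleRNN(rnn_units),                     # summarises the character window
    layers.Dense(vocab_size, activation="softmax"),  # distribution over the next character
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def generate(seed_ids, n_chars=200, temperature=1.0):
    """Sample characters one at a time, feeding each prediction back in."""
    ids = list(seed_ids)
    for _ in range(n_chars):
        window = np.array(ids[-seq_len:], dtype="int32")[None, :]
        probs = model.predict(window, verbose=0)[0].astype("float64")
        logits = np.log(probs + 1e-9) / temperature
        probs = np.exp(logits) / np.exp(logits).sum()
        ids.append(int(np.random.choice(vocab_size, p=probs)))
    return ids
```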
## License
This project is licensed under the MIT License
### Happy CODING...!! 💻
|
beomi/KoAlpaca-KoRWKV-1.5B
|
beomi
| 2024-01-05T13:42:59Z
| 2,307
| 6
|
transformers
|
[
"transformers",
"pytorch",
"rwkv",
"text-generation",
"KoRWKV",
"KoAlpaca",
"ko",
"dataset:KoAlpaca-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2023-05-25T08:46:43Z
|
---
language:
- ko
license: apache-2.0
tags:
- KoRWKV
- KoAlpaca
datasets:
- KoAlpaca-v1.0
pipeline_tag: text-generation
base_model: KoRWKV-1.5B
model-index:
- name: KoAlpaca-KoRWKV-1.5B
results: []
---
> 🚧 Note: this repo is only for demo purposes; the currently uploaded version is a finetuned version of a KoRWKV checkpoint that is only ~20% trained (~31 billion tokens) 🚧
# beomi/KoAlpaca-KoRWKV-1.5B (v1.0)
This model is a fine-tuned version of [KoRWKV-1.5B](https://huggingface.co/beomi/KoRWKV-1.5B) on the KoAlpaca Dataset v1.0.
Dataset available at [KoAlpaca Github Repository](https://github.com/Beomi/KoAlpaca)
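The card does not include usage code; a minimal sketch with the Transformers text-generation pipeline is shown below. The instruction-style prompt is only an illustration, since the exact prompt format is not documented here.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="beomi/KoAlpaca-KoRWKV-1.5B")

# Illustrative instruction-style prompt ("What is deep learning?" in Korean)
prompt = "### 질문: 딥러닝이 뭐야?\n\n### 답변:"
result = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])
```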
## Training procedure
### Train Device
- A100 80G x2
- ~2hrs
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- seed: 42
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 2.0
- mixed_precision_training: Native AMP fp16
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.10.1
- Tokenizers 0.13.2
|
LoneStriker/Norobara-ZLoss-8x7B-2.4bpw-h6-exl2
|
LoneStriker
| 2024-01-05T13:38:25Z
| 8
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] |
text-generation
| 2024-01-05T13:32:38Z
|
---
inference: false
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- mixtral
datasets:
- LDJnr/Capybara
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
---
# Norobara-ZLoss-8x7B
This is an experimental instruct-tuned [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)-based model trained using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
It primarily uses the Capybara and No Robots datasets (thus the name). The goal was to create an uncensored general instruction following model, as well as test various loss implementations while we figure out how the heck to train Mixtral properly.
[Exl2 Quants](https://huggingface.co/royallab/Norobara-ZLoss-8x7B-exl2)
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
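A small, hedged sketch of how this multi-turn format could be assembled in Python; the helper below is illustrative and not part of the model's tooling:
```python
def build_prompt(system_prompt, turns):
    """Assemble the modified multi-turn Alpaca format shown above.

    `turns` is a list of (user_message, model_response) pairs; pass None as the
    final response to leave the prompt open for the model to complete.
    """
    parts = [f"### Instruction:\n{system_prompt}"]
    for user_message, model_response in turns:
        parts.append(f"### Input:\n{user_message}")
        parts.append("### Response:" + (f"\n{model_response}" if model_response else "\n"))
    return "\n\n".join(parts)

prompt = build_prompt(
    "You are a helpful assistant.",
    [("Write a haiku about llamas.", None)],
)
print(prompt)
```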
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a QLora adapter for 3 epochs using a single H100 GPU for around 13 hours.
|
vipinbansal179/SetFit_sms_Analyzer5c95292
|
vipinbansal179
| 2024-01-05T13:33:17Z
| 8
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2023-12-23T21:21:15Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: receive upi mandate collect request marg techno project private limit inr
15000.00. log google pay app authorize - axis bank
- text: 'sep-23 statement credit card x6343 total due : inr 5575.55 min due : inr
4811.55 due date : 08-oct-23 . pay www.kotak.com/rd/ccpymt - kotak bank'
- text: '< # > use otp : 8233 login turtlemintpro zck+rfoaqnm'
- text: 'arrive today : please use otp-550041 carefully read instructions secure amazon
package ( id : sptp719784310 )'
- text: a/c xxx51941 credit rs 132.00 12-08-2023 - fd1186130010001148int:132.00 tax:0.00.
a/c balance rs 67022.91 .please call 18002082121 query . ujjivan sfb
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/all-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/all-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9722222222222222
name: Accuracy
---
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
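For context, here is a minimal sketch of that two-step loop with the `setfit` API; the toy texts and labels below are placeholders, not the data this model was trained on:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny placeholder few-shot dataset (label ids follow this card: 0 = debit, 1 = credit, 2 = other)
train_dataset = Dataset.from_dict({
    "text": [
        "rs 260.00 debit a/c xxxxxx7783 upi ref 325154274303",
        "account credit inr 50.00 upi ref 360562629741",
        "use otp : 8233 login turtlemintpro",
    ],
    "label": [0, 1, 2],
})

model = SetFitModel.from_pretrained("sentence-transformers/all-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=4)  # mirrors the hyperparameters listed below
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()  # contrastive fine-tuning of the body, then fitting the LogisticRegression head
preds = model.predict(["a/c xxx51941 credit rs 132.00"])
```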
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 3 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 2 | <ul><li>'validity airtel xstream fiber id 20001896982 expire 04-sep-23 . please recharge rs 589 enjoy uninterrupted service . recharge , click www.airtel.in/5/c_summary ? n=021710937343_dsl . please ignore already pay .'</li><li>'initiate process add a/c . xxxx59 upi app - axis bank'</li><li>'google-pay registration initiate icici bank . do , report bank . card details/otp/cvv secret . disclose anyone .'</li></ul> |
| 0 | <ul><li>'rs 260.00 debit a/c xxxxxx7783 credit krjngm @ oksbi upi ref:325154274303. ? call 18005700 -bob'</li><li>'send rs.400.00 kotak bank ac x4524 7800600122 @ ybl 15-10-23.upi ref 328855774953. , kotak.com/fraud'</li><li>'send rs.400.00 kotak bank ac x4524 7800600122 @ ybl 15-10-23.upi ref 328855774953. , kotak.com/fraud'</li></ul> |
| 1 | <ul><li>'dear bob upi user , account credit inr 50.00 date 2023-08-27 11:41:09 upi ref 360562629741 - bob'</li><li>'receive rs.10000.00 kotak bank ac x4524 mahimagyamlani08 @ okaxis 21-08-23.bal:197,838.14.upi ref:323334598750'</li><li>'update ! inr5.66 credit federal bank account xxxx9374 jupiter app . happy bank !'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9722 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vipinbansal179/SetFit_sms_Analyzer5c95292")
# Run inference
preds = model("< # > use otp : 8233 login turtlemintpro zck+rfoaqnm")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 5 | 20.5357 | 35 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 31 |
| 1 | 28 |
| 2 | 81 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:-------:|:-------------:|:---------------:|
| 0.0014 | 1 | 0.2939 | - |
| 0.0708 | 50 | 0.1698 | - |
| 0.1416 | 100 | 0.0557 | - |
| 0.2125 | 150 | 0.0614 | - |
| 0.2833 | 200 | 0.0099 | - |
| 0.3541 | 250 | 0.0005 | - |
| 0.4249 | 300 | 0.0002 | - |
| 0.4958 | 350 | 0.0001 | - |
| 0.5666 | 400 | 0.0001 | - |
| 0.6374 | 450 | 0.0001 | - |
| 0.7082 | 500 | 0.0001 | - |
| 0.7790 | 550 | 0.0001 | - |
| 0.8499 | 600 | 0.0002 | - |
| 0.9207 | 650 | 0.0001 | - |
| 0.9915 | 700 | 0.0001 | - |
| **1.0** | **706** | **-** | **0.0312** |
| 1.0623 | 750 | 0.0001 | - |
| 1.1331 | 800 | 0.0001 | - |
| 1.2040 | 850 | 0.0001 | - |
| 1.2748 | 900 | 0.0 | - |
| 1.3456 | 950 | 0.0001 | - |
| 1.4164 | 1000 | 0.0 | - |
| 1.4873 | 1050 | 0.0 | - |
| 1.5581 | 1100 | 0.0 | - |
| 1.6289 | 1150 | 0.0 | - |
| 1.6997 | 1200 | 0.0 | - |
| 1.7705 | 1250 | 0.0 | - |
| 1.8414 | 1300 | 0.0001 | - |
| 1.9122 | 1350 | 0.0 | - |
| 1.9830 | 1400 | 0.0001 | - |
| 2.0 | 1412 | - | 0.0366 |
| 2.0538 | 1450 | 0.0 | - |
| 2.1246 | 1500 | 0.0001 | - |
| 2.1955 | 1550 | 0.0 | - |
| 2.2663 | 1600 | 0.0 | - |
| 2.3371 | 1650 | 0.0 | - |
| 2.4079 | 1700 | 0.0 | - |
| 2.4788 | 1750 | 0.0 | - |
| 2.5496 | 1800 | 0.0 | - |
| 2.6204 | 1850 | 0.0 | - |
| 2.6912 | 1900 | 0.0 | - |
| 2.7620 | 1950 | 0.0 | - |
| 2.8329 | 2000 | 0.0 | - |
| 2.9037 | 2050 | 0.0 | - |
| 2.9745 | 2100 | 0.0 | - |
| 3.0 | 2118 | - | 0.0414 |
| 3.0453 | 2150 | 0.0 | - |
| 3.1161 | 2200 | 0.0 | - |
| 3.1870 | 2250 | 0.0 | - |
| 3.2578 | 2300 | 0.0 | - |
| 3.3286 | 2350 | 0.0 | - |
| 3.3994 | 2400 | 0.0 | - |
| 3.4703 | 2450 | 0.0 | - |
| 3.5411 | 2500 | 0.0 | - |
| 3.6119 | 2550 | 0.0 | - |
| 3.6827 | 2600 | 0.0 | - |
| 3.7535 | 2650 | 0.0 | - |
| 3.8244 | 2700 | 0.0 | - |
| 3.8952 | 2750 | 0.0 | - |
| 3.9660 | 2800 | 0.0 | - |
| 4.0 | 2824 | - | 0.0366 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.0
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
igorwang/mistral-7b-bnb-4bit-citecls-lora
|
igorwang
| 2024-01-05T13:28:42Z
| 2
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-bnb-4bit",
"region:us"
] | null | 2024-01-05T13:28:31Z
|
---
library_name: peft
base_model: unsloth/mistral-7b-bnb-4bit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Atmpondeck/Dc
|
Atmpondeck
| 2024-01-05T13:28:37Z
| 0
| 0
| null |
[
"license:bigscience-bloom-rail-1.0",
"region:us"
] | null | 2024-01-05T13:28:37Z
|
---
license: bigscience-bloom-rail-1.0
---
|
mwz/zephyr-khaadi
|
mwz
| 2024-01-05T13:27:45Z
| 2
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-01-05T11:53:12Z
|
---
license: apache-2.0
language:
- en
library_name: peft
---
## Usage
Here is an example of how you would load:
```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("mwz/zephyr-khaadi")
model = AutoPeftModelForCausalLM.from_pretrained(
    "mwz/zephyr-khaadi",
    low_cpu_mem_usage=True,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="cuda",
)
generation_config = GenerationConfig(
    do_sample=True,
    top_k=1,
    temperature=0.1,
    max_new_tokens=150,
    pad_token_id=tokenizer.eos_token_id,
)

def process_data_sample(messages):
    # Build a "<|role|>\n content" formatted prompt from chat-style messages
    processed_example = ""
    for message in messages:
        role = message["role"]
        content = message["content"]
        processed_example += f"<|{role}|>\n {content}\n"
    return processed_example
```
Inference can then be performed as usual with HF models as follows:
```python
messages = [
    {"role": "system", "content": "You are a Khaadi Social Media Post Generator who helps with user queries or generate him khaadi posts give only three hashtags and be concise as possible dont try to make up."},
    {"role": "user", "content": "Generate post on new arrival of winter"},
]
inp_str = process_data_sample(messages)
inputs = tokenizer(inp_str, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, generation_config=generation_config)
answer = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(answer)
```
Expected output similar to the following:
```
<|system|>
You are a Khaadi Social Media Post Generator who helps with user queries or generate him khaadi posts give only three hashtags and be concise as possible dont try to make up.
<|user|>
Generate post on new arrival of winter
#Khaadi #WinterArrivals #Winter21
Winter is here and we’ve got you covered!
Available in-stores and online
#Khaadi #WinterCollection #Winter2024 #WinterArrivals #Khaadi #KhaadiFabrics #KhaadiHome
```
|
WikiHong/Taxi-v3
|
WikiHong
| 2024-01-05T13:23:36Z
| 0
| 0
| null |
[
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T13:23:24Z
|
---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="WikiHong/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
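As a rough sketch, a greedy rollout with the loaded Q-table could look like this, assuming the pickled dictionary exposes a `qtable` entry (as in the Deep RL course notebooks) and a Gymnasium-style step API:
```python
import numpy as np
import gymnasium as gym  # assumption: Gymnasium API (reset/step return the 5-tuple form)

env = gym.make("Taxi-v3")
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # greedy action from the learned Q-table
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```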
|
LoneStriker/sqlcoder-34b-alpha-4.0bpw-h6-exl2
|
LoneStriker
| 2024-01-05T13:23:11Z
| 2
| 0
|
transformers
|
[
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-01-05T13:15:36Z
|
---
license: cc-by-4.0
language:
- en
pipeline_tag: text-generation
---
# Defog SQLCoder
**Updated on Nov 14 to reflect benchmarks for SQLCoder-34B**
Defog's SQLCoder is a state-of-the-art LLM for converting natural language questions to SQL queries.
[Interactive Demo](https://defog.ai/sqlcoder-demo/) | [🤗 HF Repo](https://huggingface.co/defog/sqlcoder-34b-alpha) | [♾️ Colab](https://colab.research.google.com/drive/1z4rmOEiFkxkMiecAWeTUlPl0OmKgfEu7?usp=sharing) | [🐦 Twitter](https://twitter.com/defogdata)
## TL;DR
SQLCoder-34B is a 34B parameter model that outperforms `gpt-4` and `gpt-4-turbo` for natural language to SQL generation tasks on our [sql-eval](https://github.com/defog-ai/sql-eval) framework, and significantly outperforms all popular open-source models.
SQLCoder-34B is fine-tuned on a base CodeLlama model.
## Results on novel datasets not seen in training
| model | perc_correct |
|-|-|
| defog-sqlcoder-34b | 84.0 |
| gpt4-turbo-2023-11-09 | 82.5 |
| gpt4-2023-11-09 | 82.5 |
| defog-sqlcoder2 | 77.5 |
| gpt4-2023-08-28 | 74.0 |
| defog-sqlcoder-7b | 71.0 |
| gpt-3.5-2023-10-04 | 66.0 |
| claude-2 | 64.5 |
| gpt-3.5-2023-08-28 | 61.0 |
| claude_instant_1 | 61.0 |
| text-davinci-003 | 52.5 |

## License
The code in this repo (what little there is of it) is Apache-2 licensed. The model weights have a `CC BY-SA 4.0` license. The TL;DR is that you can use and modify the model for any purpose – including commercial use. However, if you modify the weights (for example, by fine-tuning), you must open-source your modified weights under the same license terms.
## Training
Defog was trained on more than 20,000 human-curated questions. These questions were based on 10 different schemas. None of the schemas in the training data were included in our evaluation framework.
You can read more about our [training approach](https://defog.ai/blog/open-sourcing-sqlcoder2-7b/) and [evaluation framework](https://defog.ai/blog/open-sourcing-sqleval/).
## Results by question category
We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.
| | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-34b | 80 | 94.3 | 88.6 | 74.3 | 82.9 | 82.9 |
| gpt-4 | 68 | 94.3 | 85.7 | 77.1 | 85.7 | 80 |
| sqlcoder2-15b | 76 | 80 | 77.1 | 60 | 77.1 | 77.1 |
| sqlcoder-7b | 64 | 82.9 | 74.3 | 54.3 | 74.3 | 74.3 |
| gpt-3.5 | 68 | 77.1 | 68.6 | 37.1 | 71.4 | 74.3 |
| claude-2 | 52 | 71.4 | 74.3 | 57.1 | 65.7 | 62.9 |
| claude-instant | 48 | 71.4 | 74.3 | 45.7 | 62.9 | 60 |
| gpt-3 | 32 | 71.4 | 68.6 | 25.7 | 57.1 | 54.3 |
<img width="831" alt="image" src="https://github.com/defog-ai/sqlcoder/assets/5008293/79c5bdc8-373c-4abd-822e-e2c2569ed353">
## Using SQLCoder
You can use SQLCoder via the `transformers` library by downloading our model weights from the Hugging Face repo. We have added sample code for [inference](./inference.py) on a [sample database schema](./metadata.sql).
```bash
python inference.py -q "Question about the sample database goes here"
# Sample question:
# Do we get more revenue from customers in New York compared to customers in San Francisco? Give me the total revenue for each city, and the difference between the two.
```
You can also use a demo on our website [here](https://defog.ai/sqlcoder-demo)
## Hardware Requirements
SQLCoder-34B has been tested on a 4xA10 GPU with `float16` weights. You can also load an 8-bit or 4-bit quantized version of the model on consumer hardware with 20GB or more of memory, such as an RTX 4090, an RTX 3090, or an Apple M2 Pro, M2 Max, or M2 Ultra chip.
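As a rough sketch (not the repo's `inference.py`), a 4-bit load with `transformers` and `bitsandbytes` along these lines should fit within that memory budget; the prompt here is a placeholder you would build from your own schema:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "defog/sqlcoder-34b-alpha"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.float16,
    ),
)

# Placeholder prompt; in practice, include your database schema and the question
prompt = "### Task\nGenerate a SQL query to answer the question below.\n### Question\nHow many users signed up last month?\n### SQL\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```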
## Todo
- [x] Open-source the v1 model weights
- [x] Train the model on more data, with higher data variance
- [ ] Tune the model further with Reward Modelling and RLHF
- [ ] Pretrain a model from scratch that specializes in SQL analysis
|
rolmez/t5-small-finetuned-xsum
|
rolmez
| 2024-01-05T13:08:08Z
| 11
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T09:42:07Z
|
---
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
Poulpidot/wav2vec2-large-xlsr-53-french-onnx
|
Poulpidot
| 2024-01-05T13:07:08Z
| 1
| 0
|
transformers
|
[
"transformers",
"onnx",
"wav2vec2",
"automatic-speech-recognition",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-01-02T19:38:47Z
|
Converted to ONNX from: https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-french
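A minimal inference sketch with 🤗 Optimum's ONNX Runtime backend (an assumption-based example; it presumes the processor files ship with this repo and uses a placeholder audio path):
```python
import librosa
import torch
from transformers import Wav2Vec2Processor
from optimum.onnxruntime import ORTModelForCTC

repo_id = "Poulpidot/wav2vec2-large-xlsr-53-french-onnx"
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = ORTModelForCTC.from_pretrained(repo_id)

# Load a 16 kHz mono waveform (the file path is a placeholder)
audio_array, _ = librosa.load("sample_fr.wav", sr=16_000)

inputs = processor(audio_array, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```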
|
karawalla/mistral_b_karawalla_shiptraining24001
|
karawalla
| 2024-01-05T13:05:50Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-01-05T05:23:49Z
|
---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
ostapeno/newt_adaNeo1B_high_school_psychology_sbs0.5_svdemb_sgd_full_ft_finegrained
|
ostapeno
| 2024-01-05T12:56:44Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-05T04:18:47Z
|
Number of experts present in the library: 3
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| high_school_psychology_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
Last updated on: 2024-01-05 12:56:44+00:00
|
Dagonez/bert-finetuned-squad
|
Dagonez
| 2024-01-05T12:54:37Z
| 12
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2023-12-29T15:40:07Z
|
---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/newt_adaNeo1B_niv2_dialogue_act_recognition_sbs0.5_svdemb_sgd_full_ft_finegrained
|
ostapeno
| 2024-01-05T12:51:44Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-05T08:49:35Z
|
Number of experts present in the library: 6
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| niv2_dialogue_act_recognition_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v4 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v3 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v2 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
| niv2_dialogue_act_recognition_v5 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/niv2_dialogue_act_recognition | lora |
Last updated on: 2024-01-05 12:51:44+00:00
|
TheBloke/Norobara-ZLoss-8x7B-AWQ
|
TheBloke
| 2024-01-05T12:45:09Z
| 9
| 3
|
transformers
|
[
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"dataset:LDJnr/Capybara",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:LDJnr/Verified-Camel",
"dataset:HuggingFaceH4/no_robots",
"dataset:Doctor-Shotgun/no-robots-sharegpt",
"dataset:Doctor-Shotgun/capybara-sharegpt",
"base_model:Doctor-Shotgun/Norobara-ZLoss-8x7B",
"base_model:quantized:Doctor-Shotgun/Norobara-ZLoss-8x7B",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-01-05T12:00:42Z
|
---
base_model: Doctor-Shotgun/Norobara-ZLoss-8x7B
datasets:
- LDJnr/Capybara
- unalignment/toxic-dpo-v0.1
- LDJnr/Verified-Camel
- HuggingFaceH4/no_robots
- Doctor-Shotgun/no-robots-sharegpt
- Doctor-Shotgun/capybara-sharegpt
inference: false
language:
- en
library_name: transformers
model_creator: Doctor Shotgun
model_name: Norobara ZLoss 8X7B
model_type: mixtral
pipeline_tag: text-generation
prompt_template: '### Instruction:
{system_message}
### Input:
{prompt}
### Response:
'
quantized_by: TheBloke
tags:
- mixtral
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Norobara ZLoss 8X7B - AWQ
- Model creator: [Doctor Shotgun](https://huggingface.co/Doctor-Shotgun)
- Original model: [Norobara ZLoss 8X7B](https://huggingface.co/Doctor-Shotgun/Norobara-ZLoss-8x7B)
<!-- description start -->
## Description
This repo contains AWQ model files for [Doctor Shotgun's Norobara ZLoss 8X7B](https://huggingface.co/Doctor-Shotgun/Norobara-ZLoss-8x7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
**MIXTRAL AWQ**
This is a Mixtral AWQ model.
For AutoAWQ inference, please install AutoAWQ 0.1.8 or later.
Support via Transformers is also available, but currently requires installing Transformers from Github: `pip3 install git+https://github.com/huggingface/transformers.git`
vLLM: version 0.2.6 is confirmed to support Mixtral AWQs.
TGI: I tested version 1.3.3 and it loaded the model fine, but I was not able to get any output back. Further testing/debug is required. (Let me know if you get it working!)
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality compared to the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVidia GPUs only. macOS users: please use GGUF models instead.
AWQ models are supported by (note that not all of these may support Mixtral models yet - see above):
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-GGUF)
* [Doctor Shotgun's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Doctor-Shotgun/Norobara-ZLoss-8x7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: Instruction-Input-Response
```
### Instruction:
{system_message}
### Input:
{prompt}
### Response:
```
<!-- prompt-template end -->
<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters
I currently release 128g GEMM models only. The addition of group_size 32 models, and GEMV kernel models, is being actively considered.
Models are released as sharded safetensors files.
| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Norobara-ZLoss-8x7B-AWQ/tree/main) | 4 | 128 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 8192 | 24.65 GB |
<!-- README_AWQ.md-provided-files end -->
<!-- README_AWQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Norobara-ZLoss-8x7B-AWQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Norobara-ZLoss-8x7B-AWQ`
7. Select **Loader: AutoAWQ**.
8. Click Load, and the model will load and is now ready for use.
9. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
10. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_AWQ.md-text-generation-webui end -->
<!-- README_AWQ.md-use-from-vllm start -->
## Multi-user inference server: vLLM
Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).
- Please ensure you are using vLLM version 0.2 or later.
- When using vLLM as a server, pass the `--quantization awq` parameter.
For example:
```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Norobara-ZLoss-8x7B-AWQ --quantization awq --dtype auto
```
- When using vLLM from Python code, again set `quantization=awq`.
For example:
```python
from vllm import LLM, SamplingParams

prompts = [
    "Tell me about AI",
    "Write a story about llamas",
    "What is 291 - 150?",
    "How much wood would a woodchuck chuck if a woodchuck could chuck wood?",
]

system_message = "You are a helpful assistant."  # replace with your own system prompt

prompt_template = '''### Instruction:
{system_message}
### Input:
{prompt}
### Response:
'''

prompts = [prompt_template.format(system_message=system_message, prompt=prompt) for prompt in prompts]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)
llm = LLM(model="TheBloke/Norobara-ZLoss-8x7B-AWQ", quantization="awq", dtype="auto")
outputs = llm.generate(prompts, sampling_params)
# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm start -->
<!-- README_AWQ.md-use-from-tgi start -->
## Multi-user inference server: Hugging Face Text Generation Inference (TGI)
Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/Norobara-ZLoss-8x7B-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires [huggingface-hub](https://github.com/huggingface/huggingface_hub) 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # replace with your own system prompt
prompt_template = f'''### Instruction:
{system_message}
### Input:
{prompt}
### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1,
)
print(f"Model output: {response}")
```
<!-- README_AWQ.md-use-from-tgi end -->
<!-- README_AWQ.md-use-from-python start -->
## Inference from Python code using Transformers
### Install the necessary packages
- Requires: [Transformers](https://huggingface.co/docs/transformers) 4.35.0 or later.
- Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.6 or later.
```shell
pip3 install --upgrade "autoawq>=0.1.6" "transformers>=4.35.0"
```
Note that if you are using PyTorch 2.0.1, the above AutoAWQ command will automatically upgrade you to PyTorch 2.1.0.
If you are using CUDA 11.8 and wish to continue using PyTorch 2.0.1, instead run this command:
```shell
pip3 install https://github.com/casper-hansen/AutoAWQ/releases/download/v0.1.6/autoawq-0.1.6+cu118-cp310-cp310-linux_x86_64.whl
```
If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:
```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```
### Transformers example code (requires Transformers 4.35.0 and later)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_name_or_path = "TheBloke/Norobara-ZLoss-8x7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    low_cpu_mem_usage=True,
    device_map="cuda:0"
)

# Using the text streamer to stream output one token at a time
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant."  # replace with your own system prompt
prompt_template = f'''### Instruction:
{system_message}
### Input:
{prompt}
### Response:
'''

# Convert prompt to tokens
tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

generation_params = {
    "do_sample": True,
    "temperature": 0.7,
    "top_p": 0.95,
    "top_k": 40,
    "max_new_tokens": 512,
    "repetition_penalty": 1.1
}

# Generate streamed output, visible one token at a time
generation_output = model.generate(
    tokens,
    streamer=streamer,
    **generation_params
)

# Generation without a streamer, which will include the prompt in the output
generation_output = model.generate(
    tokens,
    **generation_params
)

# Get the tokens from the output, decode them, print them
token_output = generation_output[0]
text_output = tokenizer.decode(token_output)
print("model.generate output: ", text_output)

# Inference is also possible via Transformers' pipeline
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    **generation_params
)

pipe_output = pipe(prompt_template)[0]['generated_text']
print("pipeline output: ", pipe_output)
```
<!-- README_AWQ.md-use-from-python end -->
<!-- README_AWQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with:
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui) using `Loader: AutoAWQ`.
- [vLLM](https://github.com/vllm-project/vllm) version 0.2.0 and later.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) version 1.1.0 and later.
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later.
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) version 0.1.1 and later.
<!-- README_AWQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: Doctor Shotgun's Norobara ZLoss 8X7B
# Norobara-ZLoss-8x7B
This is an experimental instruct-tuned [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1)-based model trained using [Charles Goddard](https://huggingface.co/chargoddard)'s ZLoss and Megablocks-based fork of transformers.
It primarily uses the Capybara and No Robots datasets (thus the name). The goal was to create an uncensored general instruction-following model, as well as to test various loss implementations while we figure out how the heck to train Mixtral properly.
[Exl2 Quants](https://huggingface.co/royallab/Norobara-ZLoss-8x7B-exl2)
## Usage:
The intended prompt format is a modified multi-turn Alpaca instruction format:
```
### Instruction:
{system prompt}
### Input:
{user message}
### Response:
{model response}
### Input:
{user message}
### Response:
{model response}
(etc.)
```
## Bias, Risks, and Limitations
The model will show biases present in the base model. No ethical alignment was applied to prevent the generation of toxic or harmful outputs (in fact the opposite, with examples from toxic-DPO included), so generate at your own risk.
## Training Details
This model was trained as a QLoRA adapter for 3 epochs using a single H100 GPU for around 13 hours.
|
salam123/Llama_1
|
salam123
| 2024-01-05T12:44:16Z
| 2
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:NousResearch/Llama-2-7b-chat-hf",
"base_model:adapter:NousResearch/Llama-2-7b-chat-hf",
"region:us"
] | null | 2024-01-05T12:42:22Z
|
---
library_name: peft
base_model: NousResearch/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Koyored/dqn-SpaceInvadersNoFrameskip-v4
|
Koyored
| 2024-01-05T12:43:15Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T12:42:54Z
|
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 6.50 +/- 10.74
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Koyored -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Koyored -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Koyored
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
anismahmahi/appeal-to-authority-setfit-model
|
anismahmahi
| 2024-01-05T12:34:45Z
| 6
| 0
|
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2024-01-05T12:34:25Z
|
---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- f1
widget:
- text: 'Pointing out the glaring nature of the smear campaign was the fact that there
has been absolutely zero information released about the warrants conducted on
officer Amber Guyger, the killer cop who lived just below Jean.
'
- text: 'Ganesh makes wild leaps and inferences.
'
- text: 'But during his 2004 campaign for the Senate, Obama and his corrupt party
in Chicago somehow managed to unseal the divorce records of his opponent Jack
Ryan, who was leading by a large margin.
'
- text: 'Trump has only the “deplorables,” and they are unorganized and will experience
retribution once Trump is removed.
'
- text: '“Al Franken must be held accountable if our party wants to live up to our
commitment to women & girls.”
'
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: f1
value: 0.2236842105263158
name: F1
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
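For reference, a minimal sketch of those two steps with the SetFit 1.x `Trainer` API; the toy dataset and its label assignment are placeholders for illustration, not this card's actual training data:
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Hypothetical few-shot examples; the label semantics are an assumption.
train_ds = Dataset.from_dict({
    "text": [
        "As the famous expert said, this must be true.",
        "Experts agree, so no further proof is needed.",
        "The committee met on Tuesday.",
        "The report was published last year.",
    ],
    "label": [1, 1, 0, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=2)  # mirrors the hyperparameters listed below

trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()  # runs both phases: contrastive body fine-tuning, then fitting the head

print(model.predict(["Ganesh makes wild leaps and inferences."]))
```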
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | <ul><li>'“They know this is one of the great scandals in the history of our country because basically what they did is, they used [former Trump campaign aide] Carter Page, who nobody even knew, who I feel very badly for, I think he’s been treated very badly.\n'</li><li>'The Guardian did not make a mistake in vilifying Assange without a shred of evidence.\n'</li><li>'He himself said: “No one defends Islam like Arab Christians.” It is to defend Islam that Western clerics do not raise their voice against such acts of brutality.\n'</li></ul> |
| 1 | <ul><li>'As the political scientist Richard Neustadt said, political elites are constantly evaluating and re-evaluating the president.\n'</li><li>'“I can tell you 100% this is not that kind of guy,” said Rick, adding that he would see Paddock every other day and that the two would go to a local bar and play slot machines.\n'</li><li>'Now, new information released by investigative reporter Laura Loomer proves that authorities have directly lied to the American people about the case at least once by claiming that supposed shooter Stephen Paddock checked into the Mandalay Bay Hotel on September 28th when valet records (with photos) prove he actually arrived three days earlier.\n'</li></ul> |
## Evaluation
### Metrics
| Label | F1 |
|:--------|:-------|
| **all** | 0.2237 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("anismahmahi/appeal-to-authority-setfit-model")
# Run inference
preds = model("Ganesh makes wild leaps and inferences.
")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 28.8867 | 111 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 452 |
| 1 | 113 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (2, 2)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:--------:|:-------------:|:---------------:|
| 0.0007 | 1 | 0.3148 | - |
| 0.0354 | 50 | 0.2792 | - |
| 0.0708 | 100 | 0.1707 | - |
| 0.1062 | 150 | 0.1197 | - |
| 0.1415 | 200 | 0.0768 | - |
| 0.1769 | 250 | 0.0406 | - |
| 0.2123 | 300 | 0.0053 | - |
| 0.2477 | 350 | 0.0571 | - |
| 0.2831 | 400 | 0.0324 | - |
| 0.3185 | 450 | 0.001 | - |
| 0.3539 | 500 | 0.077 | - |
| 0.3892 | 550 | 0.0002 | - |
| 0.4246 | 600 | 0.0011 | - |
| 0.4600 | 650 | 0.003 | - |
| 0.4954 | 700 | 0.0004 | - |
| 0.5308 | 750 | 0.0004 | - |
| 0.5662 | 800 | 0.0006 | - |
| 0.6016 | 850 | 0.0002 | - |
| 0.6369 | 900 | 0.0002 | - |
| 0.6723 | 950 | 0.0003 | - |
| 0.7077 | 1000 | 0.0116 | - |
| 0.7431 | 1050 | 0.0059 | - |
| 0.7785 | 1100 | 0.0002 | - |
| 0.8139 | 1150 | 0.0001 | - |
| 0.8493 | 1200 | 0.0001 | - |
| 0.8846 | 1250 | 0.0003 | - |
| 0.9200 | 1300 | 0.0001 | - |
| 0.9554 | 1350 | 0.0 | - |
| 0.9908 | 1400 | 0.0125 | - |
| **1.0** | **1413** | **-** | **0.2868** |
| 1.0262 | 1450 | 0.0003 | - |
| 1.0616 | 1500 | 0.0002 | - |
| 1.0970 | 1550 | 0.0001 | - |
| 1.1323 | 1600 | 0.0002 | - |
| 1.1677 | 1650 | 0.0001 | - |
| 1.2031 | 1700 | 0.0001 | - |
| 1.2385 | 1750 | 0.0038 | - |
| 1.2739 | 1800 | 0.0001 | - |
| 1.3093 | 1850 | 0.0065 | - |
| 1.3447 | 1900 | 0.0002 | - |
| 1.3800 | 1950 | 0.0002 | - |
| 1.4154 | 2000 | 0.0197 | - |
| 1.4508 | 2050 | 0.0061 | - |
| 1.4862 | 2100 | 0.0001 | - |
| 1.5216 | 2150 | 0.0 | - |
| 1.5570 | 2200 | 0.0321 | - |
| 1.5924 | 2250 | 0.0002 | - |
| 1.6277 | 2300 | 0.0331 | - |
| 1.6631 | 2350 | 0.0069 | - |
| 1.6985 | 2400 | 0.0001 | - |
| 1.7339 | 2450 | 0.0 | - |
| 1.7693 | 2500 | 0.0 | - |
| 1.8047 | 2550 | 0.0337 | - |
| 1.8401 | 2600 | 0.0347 | - |
| 1.8754 | 2650 | 0.0612 | - |
| 1.9108 | 2700 | 0.0398 | - |
| 1.9462 | 2750 | 0.0001 | - |
| 1.9816 | 2800 | 0.0001 | - |
| 2.0 | 2826 | - | 0.2926 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.1
- Sentence Transformers: 2.2.2
- Transformers: 4.35.2
- PyTorch: 2.1.0+cu121
- Datasets: 2.16.1
- Tokenizers: 0.15.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
MyPdfChat/MyPdfChat
|
MyPdfChat
| 2024-01-05T12:27:22Z
| 0
| 1
| null |
[
"region:us"
] | null | 2023-12-16T08:39:50Z
|
---
{}
---
# MyPdfChat - Private PDF Chat based on LLM can run on any PC.
**MyPdfChat** uses a private 7B RWKV language model designed to run locally and facilitate secure PDF-based chat conversations. With RWKV, you can have confidential and encrypted conversations with your PDF documents, ensuring the privacy of your discussions.
## Features
- **Privacy**: MyPdfChat runs locally on your machine, ensuring that your conversations remain private and secure.
- **PDF Chat**: MyPdfChat enables you to have chat conversations within PDF documents, providing a unique and secure communication method.
- **Encryption**: All chat messages are encrypted to protect the confidentiality of your discussions.
- **Offline Access**: Since RWKV runs locally, you can use it even without an internet connection.
## Installation
- To install MyPdfChat from the release, follow these instructions:
- ### Step 1:
- Download the release:
  1. Go to the [Mychatpdf huggingface repo](https://huggingface.co/MyPdfChat/MyPdfChat).
  2. Download the latest release zip (`mychatpdf-vX.X.X.zip`).
- ### Step 2:
- Extract the release:
  1. Locate the downloaded `mychatpdf-vX.X.X.zip` file on your system.
  2. Extract the contents of the zip file to a directory of your choice.
- ### Step 3:
- Double-click `MyPdfchat.exe`.
## Usage
- Chat with your PDF file.
## Contributions
- Contributions to MyPdfChat are welcome! If you encounter any issues or have suggestions for improvements, please open an issue on the [GitHub repository](https://github.com/mypdfchat/MypdfChat).
## License
- MyPdfChat is released under the [MIT License](https://opensource.org/licenses/MIT).
## Acknowledgements
- We would like to thank the open-source community for their invaluable contributions to the development of MyPdfChat.
## Contact
- For any inquiries or support, please contact us at [email protected].

Thank you for using MyPdfChat! We hope you enjoy your private and secure PDF chat experience.
|
NLPProject2023Z/roberta-pretrained
|
NLPProject2023Z
| 2024-01-05T12:23:15Z
| 5
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-05T12:22:29Z
|
---
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-pretrained
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-pretrained
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
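No usage details are provided, but since this is a fill-mask checkpoint, a minimal inference sketch would look like the following (assuming the repository also contains the tokenizer; RoBERTa's mask token is `<mask>`):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="NLPProject2023Z/roberta-pretrained")
print(fill_mask("The capital of France is <mask>."))
```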
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 48
- eval_batch_size: 8
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.0
|
aikay/distilbert-base-uncased-finetuned-imdb
|
aikay
| 2024-01-05T12:17:19Z
| 45
| 0
|
transformers
|
[
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2024-01-05T11:59:15Z
|
---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: aikay/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aikay/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.8619
- Validation Loss: 2.5743
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8619 | 2.5743 | 0 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
ostapeno/newt_adaNeo1B_high_school_psychology_sbs0.5_svdemb_sgd_full_ft_coarsegrained
|
ostapeno
| 2024-01-05T12:11:47Z
| 0
| 0
| null |
[
"region:us"
] | null | 2024-01-04T19:52:11Z
|
Number of experts present in the library: 2
| Expert Name | Base Model | Trained on | Adapter Type |
| --- | --- | --- | --- |
| high_school_psychology_v1 | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
| high_school_psychology | EleutherAI/gpt-neo-1.3B | ostapeno/adauni-v3-10k-flat/high_school_psychology | lora |
Last updated on: 2024-01-05 12:11:47+00:00
|
HelloJiang/LAION-CLIP-ConvNeXt-Large-512
|
HelloJiang
| 2024-01-05T12:01:38Z
| 5
| 1
|
open_clip
|
[
"open_clip",
"pytorch",
"convnext",
"zero-shot-image-classification",
"clip",
"arxiv:2201.03545",
"arxiv:2210.08402",
"arxiv:1910.04867",
"license:mit",
"region:us"
] |
zero-shot-image-classification
| 2024-01-05T09:14:17Z
|
---
tags:
- zero-shot-image-classification
- clip
license: mit
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---
# Model card for CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup
# Table of Contents
1. [Model Details](#model-details)
2. [Uses](#uses)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)
5. [Acknowledgements](#acknowledgements)
6. [Citation](#citation)
# Model Details
## Model Description
A series of CLIP [ConvNeXt-Large](https://arxiv.org/abs/2201.03545) (w/ extra text depth, vision MLP head) models trained on the LAION-2B (english) subset of [LAION-5B](https://arxiv.org/abs/2210.08402) using [OpenCLIP](https://github.com/mlfoundations/open_clip).
The models utilize:
* the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Large model (`convnext_large`) as the image tower
* a MLP (`fc - gelu - drop - fc`) head in vision tower instead of the single projection of other CLIP models
* a text tower with same width but 4 layers more depth than ViT-L / RN50x16 models (depth 16, embed dim 768).
This 320x320 resolution model is a soup (weight average) of 3 fine-tunes of [CLIP-convnext_large_d.laion2B-s26B-b102K-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) at a higher resolution. It is an average of 3 fine-tunes from the final checkpoint of the original 256x256 training run, each trained on an additional ~2-3B samples with a lower learning rate. Each fine-tune used a different learning rate (1e-4, 6e-5, 5e-5) and a different number of samples (3.2B, 2B, 2.5B).
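For illustration only (not the authors' actual tooling), the souping step amounts to a plain element-wise average of the fine-tuned checkpoints' weights; a minimal PyTorch sketch, with hypothetical local checkpoint paths standing in for the three fine-tunes:
```python
import torch

# Hypothetical filenames for the three 320x320 fine-tunes.
paths = ["ft_lr1e-4.pt", "ft_lr6e-5.pt", "ft_lr5e-5.pt"]
state_dicts = [torch.load(p, map_location="cpu") for p in paths]

# Uniform soup: average every parameter across the checkpoints.
soup = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}
torch.save(soup, "convnext_large_d_320_soup.pt")
```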
At 320x320, the ConvNeXt-Large-D is significantly more efficient than the L/14 model at 336x336 that OpenAI fine-tuned. The L/14-336 model uses 2.5x more GMACs, has 2.8x more activations, and has 1.22x more parameters.
| Model | Dataset | Resolution | AugReg | Top-1 ImageNet Zero-Shot (%) |
| ----- | ------- | ---------- | ------------ | --------- |
| [convnext_large_d.laion2b_s26b_b102k-augreg](https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg) | LAION-2B | 256x256 | RRC (0.33, 1.0), RE (0.35), SD (0.1), D(0.1) | 75.9 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.6 |
| [convnext_large_d_320.laion2b_s29b_b131k-ft-soup](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup) | LAION-2B | 320x320 | RRC (0.5, 1.0), RE (0.4), SD (0.1), D(0.0) | 76.9 |
RRC = Random Resize Crop (crop pcts), RE = Random Erasing (prob), SD = Stochastic Depth (prob) -- image tower only, D = Dropout (prob) -- image tower head only
LAION-A = LAION Aesthetic, an ~900M sample subset of LAION-2B with pHash dedupe and aesthetic score filtering.
Model training done by Ross Wightman on the [stability.ai](https://stability.ai/) cluster.
# Uses
As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.
The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
## Direct Use
Zero-shot image classification, image and text retrieval, among others.
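A minimal zero-shot classification sketch with OpenCLIP; the hub id below points at the original laion upload of this checkpoint (substitute this repository's files if they differ), and the image path is a placeholder:
```python
import torch
import open_clip
from PIL import Image

model_id = "hf-hub:laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup"
model, _, preprocess = open_clip.create_model_and_transforms(model_id)
tokenizer = open_clip.get_tokenizer(model_id)

image = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # hypothetical input image
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
print(probs)
```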
## Downstream Use
Image classification and other image task fine-tuning, linear probe image classification, image generation guiding and conditioning, among others.
## Out-of-Scope Use
As per the OpenAI models,
**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
Beyond the above notice, the LAION-5B dataset used to train these models has additional considerations; see below.
# Training Details
## Training Data
This model was trained with LAION-2B -- A 2 billion sample English subset of LAION-5B (https://laion.ai/blog/laion-5b/).
**IMPORTANT NOTE:** The motivation behind dataset creation is to democratize research and experimentation around large-scale multi-modal model training and handling of uncurated, large-scale datasets crawled from publicly available internet. Our recommendation is therefore to use the dataset for research purposes. Be aware that this large-scale dataset is uncurated. Keep in mind that the uncurated nature of the dataset means that collected links may lead to strongly discomforting and disturbing content for a human viewer. Therefore, please use the demo links with caution and at your own risk. It is possible to extract a “safe” subset by filtering out samples based on the safety tags (using a customized trained NSFW classifier that we built). While this strongly reduces the chance for encountering potentially harmful content when viewing, we cannot entirely exclude the possibility for harmful content being still present in safe mode, so that the warning holds also there. We think that providing the dataset openly to broad research and other interested communities will allow for transparent investigation of benefits that come along with training large-scale models as well as pitfalls and dangers that may stay unreported or unnoticed when working with closed large datasets that remain restricted to a small community. Providing our dataset openly, we however do not recommend using it for creating ready-to-go industrial products, as the basic research about general properties and safety of such large-scale models, which we would like to encourage with this release, is still in progress.
## Training Procedure
All 320x320 model fine-tunes were trained with a global batch size of 131072 for 10-16 checkpoint intervals of 203.7M samples, for a total of ~2-3B samples seen per fine-tune.
For 320x320 models, a slurm script w/ srun below was used on 64 8-GPU (A100 40GB) nodes (Stability).
```
/opt/slurm/sbin/srun --cpu_bind=v --accel-bind=gn python -m training.main \
--save-frequency 1 \
--name "convnext_large_320" \
--pretrained ""/runs/convnext_large_256/epoch_128.pt" \
--resume 'latest' \
--train-data="pipe:aws s3 cp s3://mybucket/path/{laion{00000..xxxxx}.tar -" \
--train-num-samples 203666042 \
--dataset-type webdataset \
--precision amp_bfloat16 \
--beta2 0.98 \
--warmup 2000 \
--batch-size=256 \
--epochs=12 \
--dataset-resampled \
--aug-cfg use_timm=True scale='(0.5, 1.0)' re_prob=0.4 \
--clip-grad-norm 5.0 \
--lr 5e-5 \
--workers=6 \
--model "convnext_large_d_320" \
--seed 0 \
--ddp-static-graph \
--local-loss \
--gather-with-grad \
--grad-checkpointing
```
# Evaluation
Evaluation done with code in the [LAION CLIP Benchmark suite](https://github.com/LAION-AI/CLIP_benchmark).
## Testing Data, Factors & Metrics
### Testing Data
The testing is performed with VTAB+ (A combination of VTAB (https://arxiv.org/abs/1910.04867) w/ additional robustness datasets) for classification and COCO and Flickr for retrieval.
## Results
The models achieve between 75.9 and 76.9 top-1 zero-shot accuracy on ImageNet-1k.
Zero-shot curve of the original from-scratch 256x256 training:

An initial round of benchmarks has been performed on a wider range of datasets, viewable at https://github.com/LAION-AI/CLIP_benchmark/blob/main/benchmark/results.ipynb
# Acknowledgements
Acknowledging [stability.ai](https://stability.ai/) for compute used to train this model.
# Citation
**BibTeX:**
LAION-5B
```bibtex
@inproceedings{schuhmann2022laionb,
title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
author={Christoph Schuhmann and
Romain Beaumont and
Richard Vencu and
Cade W Gordon and
Ross Wightman and
Mehdi Cherti and
Theo Coombes and
Aarush Katta and
Clayton Mullis and
Mitchell Wortsman and
Patrick Schramowski and
Srivatsa R Kundurthy and
Katherine Crowson and
Ludwig Schmidt and
Robert Kaczmarczyk and
Jenia Jitsev},
booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
year={2022},
url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
OpenCLIP software
```bibtex
@software{ilharco_gabriel_2021_5143773,
author = {Ilharco, Gabriel and
Wortsman, Mitchell and
Wightman, Ross and
Gordon, Cade and
Carlini, Nicholas and
Taori, Rohan and
Dave, Achal and
Shankar, Vaishaal and
Namkoong, Hongseok and
Miller, John and
Hajishirzi, Hannaneh and
Farhadi, Ali and
Schmidt, Ludwig},
title = {OpenCLIP},
month = jul,
year = 2021,
note = {If you use this software, please cite it as below.},
publisher = {Zenodo},
version = {0.1},
doi = {10.5281/zenodo.5143773},
url = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@InProceedings{pmlr-v162-wortsman22a,
title = {Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time},
author = {Wortsman, Mitchell and Ilharco, Gabriel and Gadre, Samir Ya and Roelofs, Rebecca and Gontijo-Lopes, Raphael and Morcos, Ari S and Namkoong, Hongseok and Farhadi, Ali and Carmon, Yair and Kornblith, Simon and Schmidt, Ludwig},
booktitle = {Proceedings of the 39th International Conference on Machine Learning},
pages = {23965--23998},
year = {2022},
editor = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
volume = {162},
series = {Proceedings of Machine Learning Research},
month = {17--23 Jul},
publisher = {PMLR},
pdf = {https://proceedings.mlr.press/v162/wortsman22a/wortsman22a.pdf},
url = {https://proceedings.mlr.press/v162/wortsman22a.html}
}
```
OpenAI CLIP paper
```bibtex
@inproceedings{Radford2021LearningTV,
title={Learning Transferable Visual Models From Natural Language Supervision},
author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
booktitle={ICML},
year={2021}
}
```
```bibtex
@Article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
kenchenxingyu/flan-large-lora-stance
|
kenchenxingyu
| 2024-01-05T11:48:11Z
| 1
| 0
|
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"region:us"
] | null | 2024-01-04T13:16:25Z
|
---
library_name: peft
base_model: google/flan-t5-large
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
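In the absence of card details, a minimal loading sketch with PEFT; the prompt below is a guess, since the actual stance-detection input template used for fine-tuning is not documented here:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
base = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")
model = PeftModel.from_pretrained(base, "kenchenxingyu/flan-large-lora-stance")

# Hypothetical prompt -- the real input template is not documented in this card.
prompt = "What is the stance of the following text towards the target 'climate policy'? Text: We must act now."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```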
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0
|
vivekatzerebral/distilbert-base-uncased
|
vivekatzerebral
| 2024-01-05T11:47:39Z
| 6
| 0
|
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-01-04T11:51:04Z
|
---
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.93
- name: F1
type: f1
value: 0.930205100854519
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased
This model was trained from scratch on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1919
- Accuracy: 0.93
- F1: 0.9302
## Model description
More information needed
## Intended uses & limitations
More information needed
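A minimal inference sketch, assuming the standard transformers text-classification pipeline (label names come from the emotion dataset's id2label mapping stored with the checkpoint):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vivekatzerebral/distilbert-base-uncased")
print(classifier("I can't believe how wonderful this day turned out!"))
```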
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0756 | 1.0 | 250 | 0.1902 | 0.93 | 0.9304 |
| 0.0641 | 2.0 | 500 | 0.1968 | 0.939 | 0.9395 |
| 0.0507 | 3.0 | 750 | 0.1919 | 0.93 | 0.9302 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.3.0.dev20240104
- Datasets 2.16.1
- Tokenizers 0.15.0
|
LLParallax/sample_factory_human_monk
|
LLParallax
| 2024-01-05T11:46:42Z
| 0
| 0
|
sample-factory
|
[
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T11:31:11Z
|
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: nethack_challenge
type: nethack_challenge
metrics:
- type: mean_reward
value: 3245.47 +/- 2691.37
name: mean_reward
verified: false
---
An **APPO** model trained on the **nethack_challenge** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r LLParallax/sample_factory_human_monk
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.nethack.enjoy_nethack --env=nethack_challenge --train_dir=./train_dir --experiment=sample_factory_human_monk
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.nethack.train_nethack --env=nethack_challenge --character=mon-hum-neu-mal --num_workers=16 --num_envs_per_worker=32 --batch_size=4096 --train_dir=./train_dir --experiment=sample_factory_human_monk --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
bmistry4/reimplemented-ppo-LunarLander-v2
|
bmistry4
| 2024-01-05T11:43:36Z
| 0
| 0
| null |
[
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-05T11:03:45Z
|
---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 171.71 +/- 107.45
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo'
'seed': 1
'torch_deterministic': True
'cuda': True
'track': False
'wandb_project_name': 'cleanRL'
'wandb_entity': None
'capture_video': False
'env_id': 'LunarLander-v2'
'total_timesteps': 1000000
'learning_rate': 0.00025
'num_envs': 4
'num_steps': 128
'anneal_lr': True
'gae': True
'gamma': 0.999
'gae_lambda': 0.95
'num_minibatches': 16
'update_epochs': 4
'norm_adv': True
'clip_coef': 0.2
'clip_vloss': True
'ent_coef': 0.01
'vf_coef': 0.5
'max_grad_norm': 0.5
'target_kl': None
'repo_id': 'bmistry4/reimplemented-ppo-LunarLander-v2'
'batch_size': 512
'minibatch_size': 32}
```
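The two derived values at the bottom of the dictionary follow directly from the rollout settings; a quick sanity check, assuming the usual CleanRL-style PPO bookkeeping:
```python
num_envs, num_steps, num_minibatches = 4, 128, 16

batch_size = num_envs * num_steps               # 4 * 128 = 512 transitions per update
minibatch_size = batch_size // num_minibatches  # 512 // 16 = 32
print(batch_size, minibatch_size)
```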
|
mateiaass/SummarRo
|
mateiaass
| 2024-01-05T11:19:24Z
| 0
| 0
| null |
[
"summarization",
"region:us"
] |
summarization
| 2024-01-05T09:53:47Z
|
---
metrics:
- rouge
pipeline_tag: summarization
---
|
kaurm/vit-base-patch16-224-in21k-finetuned-lora-food101
|
kaurm
| 2024-01-05T11:15:31Z
| 2
| 0
|
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:adapter:google/vit-base-patch16-224-in21k",
"region:us"
] | null | 2024-01-05T11:15:26Z
|
---
library_name: peft
base_model: google/vit-base-patch16-224-in21k
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
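In the absence of card details, a minimal loading sketch with PEFT; the 101-class head and the input image below are assumptions based on the repository name, not facts stated in this card:
```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from peft import PeftModel
from PIL import Image

# The base ViT checkpoint has no Food-101 head; num_labels=101 is an assumption.
base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k", num_labels=101
)
model = PeftModel.from_pretrained(base, "kaurm/vit-base-patch16-224-in21k-finetuned-lora-food101")
processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")

image = Image.open("example.jpg")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(-1).item())
```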
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1
|
Darainer2050/PPO-LunarLander-v2
|
Darainer2050
| 2024-01-05T11:08:58Z
| 0
| 0
|
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2024-01-04T23:43:37Z
|
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.23 +/- 21.90
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
The number of parallel environments was increased to 32 (`env = make_vec_env('LunarLander-v2', n_envs=32)`), which improved performance somewhat over the standard example.
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
litfeng/outzh
|
litfeng
| 2024-01-05T11:06:47Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-01-05T05:33:47Z
|
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of zh
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - litfeng/outzh
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of zh using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.




LoRA for the text encoder was enabled: False.
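A minimal inference sketch, assuming the weights are stored in the standard `pytorch_lora_weights` layout produced by the diffusers DreamBooth LoRA training script:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("litfeng/outzh")  # the LoRA adaption weights from this repo

image = pipe("a photo of zh", num_inference_steps=30).images[0]
image.save("zh.png")
```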
|
ntc-ai/SDXL-LoRA-slider.vampire
|
ntc-ai
| 2024-01-05T11:06:10Z
| 106
| 1
|
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2024-01-05T11:06:07Z
|
---
language:
- en
thumbnail: "images/evaluate/vampire.../vampire_17_3.0.png"
widget:
- text: vampire
output:
url: images/vampire_17_3.0.png
- text: vampire
output:
url: images/vampire_19_3.0.png
- text: vampire
output:
url: images/vampire_20_3.0.png
- text: vampire
output:
url: images/vampire_21_3.0.png
- text: vampire
output:
url: images/vampire_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "vampire"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - vampire (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/vampire_17_-3.0.png" width=256 height=256 /> | <img src="images/vampire_17_0.0.png" width=256 height=256 /> | <img src="images/vampire_17_3.0.png" width=256 height=256 /> |
| <img src="images/vampire_19_-3.0.png" width=256 height=256 /> | <img src="images/vampire_19_0.0.png" width=256 height=256 /> | <img src="images/vampire_19_3.0.png" width=256 height=256 /> |
| <img src="images/vampire_20_-3.0.png" width=256 height=256 /> | <img src="images/vampire_20_0.0.png" width=256 height=256 /> | <img src="images/vampire_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
vampire
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.vampire', weight_name='vampire.safetensors', adapter_name="vampire")
# Activate the LoRA
pipe.set_adapters(["vampire"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, vampire"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
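The adapter scale can also be pushed the other way; a short follow-up sketch (continuing the variables above, and assuming negative adapter weights behave symmetrically, as the strength table suggests):
```python
# Negative scale moves away from the concept (cf. the "Strength: -3" column above).
pipe.set_adapters(["vampire"], adapter_weights=[-2.0])
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height,
             guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result_negative.png')
```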
## Support the Patreon
If you like this model please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 880+ unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
anhdt-dsai-02/ViT5_base_1024_5_1
|
anhdt-dsai-02
| 2024-01-05T11:05:19Z
| 4
| 0
|
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-01-05T10:10:49Z
|
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: ViT5_base_1024_5_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ViT5_base_1024_5_1
This model is a fine-tuned version of [VietAI/vit5-base-vietnews-summarization](https://huggingface.co/VietAI/vit5-base-vietnews-summarization) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
astromis/rubert_reply_recovery
|
astromis
| 2024-01-05T11:01:57Z
| 13
| 0
|
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"bert",
"next-sentence-prediction",
"russian",
"conversation",
"chats",
"embeddings",
"coherence",
"ru",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2023-06-12T13:15:41Z
|
---
license: mit
language:
- ru
metrics:
- f1
library_name: transformers
tags:
- russian
- conversation
- chats
- embeddings
- coherence
---
# Model Card
This model is trained to predict whether two given messages from some group chat with many members can have a `reply_to` relationship.
# Training details
It's based on [Conversational RuBERT](https://docs.deeppavlov.ai/en/master/features/models/bert.html) (cased, 12-layer, 768-hidden, 12-heads, 180M parameters) that was trained on several social media datasets. We fine-tuned it with the data from several Telegram chats. The positive `reply_to` examples were obtained by natural user annotation. The negative ones were obtained by shuffling the messages.
The task perfectly aligns with the Next Sentence Prediction task, so the fine-tuning was done in that manner.
It achieves an F1 score of 0.83 on the gold test set from our [reply recovery dataset](https://data.mendeley.com/datasets/xm86yszck2).
See the [paper](https://www.dialog-21.ru/media/5871/buyanoviplusetal046.pdf) for more details.
# Usage
**Note:** if two messages have a `reply_to` relationship, the pair is assigned **label 0**. This follows from the NSP formulation, where label 0 means "is next sentence".
```python
from transformers import AutoTokenizer, BertForNextSentencePrediction
tokenizer = AutoTokenizer.from_pretrained("astromis/rubert_reply_recovery", )
model = BertForNextSentencePrediction.from_pretrained("rubert_reply_recovery", )
inputs = tokenizer(['Где можно получить СНИЛС?', 'Я тут уже много лет'], ["Можете в МФЦ", "Куда отправить это письмо?"], return_tensors='pt',
truncation=True, max_length=512, padding = 'max_length',)
output = model(**inputs)
print(output.logits.argmax(dim=1))
# tensor([0, 1])
```
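For a soft score instead of a hard label, a softmax over the logits gives the reply probability in column 0 (per the label convention above); a short follow-up sketch:
```python
probs = output.logits.softmax(dim=1)
reply_prob = probs[:, 0]  # probability that the second message is a reply to the first
print(reply_prob)
```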
# Citation
```bibtex
@article{Buyanov2023WhoIA,
title={Who is answering to whom? Modeling reply-to relationships in Russian asynchronous chats},
author={Igor Buyanov and Darya Yaskova and Ilya Sochenkov},
journal={Computational Linguistics and Intellectual Technologies},
year={2023}
}
```
|