| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
bartowski/72B-Qwen2.5-Kunou-v1-GGUF | bartowski | "2024-12-12T02:08:39Z" | 742 | 3 | null | [
"gguf",
"generated_from_trainer",
"text-generation",
"base_model:Sao10K/72B-Qwen2.5-Kunou-v1",
"base_model:quantized:Sao10K/72B-Qwen2.5-Kunou-v1",
"license:other",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2024-12-11T22:25:12Z" | ---
quantized_by: bartowski
pipeline_tag: text-generation
license_name: qwen
base_model: Sao10K/72B-Qwen2.5-Kunou-v1
tags:
- generated_from_trainer
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
license: other
model-index:
- name: 72B-Qwen2.5-Kunou-v1
results: []
---
## Llamacpp imatrix Quantizations of 72B-Qwen2.5-Kunou-v1
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4273">b4273</a> for quantization.
Original model: https://huggingface.co/Sao10K/72B-Qwen2.5-Kunou-v1
All quants were made using the imatrix option with a dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8).
Run them in [LM Studio](https://lmstudio.ai/)
## Prompt format
```
<|im_start|>system
{system_prompt}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
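If you run the GGUF with llama.cpp directly instead of LM Studio, the same template can be supplied on the command line. A minimal sketch (binary and flag names assume a recent llama.cpp build; point `-m` at whichever quant you downloaded):
```
./llama-cli -m 72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf -e \
  -p "<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n" \
  -n 256
```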
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [72B-Qwen2.5-Kunou-v1-Q8_0.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/tree/main/72B-Qwen2.5-Kunou-v1-Q8_0) | Q8_0 | 77.26GB | true | Extremely high quality, generally unneeded but max available quant. |
| [72B-Qwen2.5-Kunou-v1-Q6_K.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/tree/main/72B-Qwen2.5-Kunou-v1-Q6_K) | Q6_K | 64.35GB | true | Very high quality, near perfect, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q5_K_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/tree/main/72B-Qwen2.5-Kunou-v1-Q5_K_M) | Q5_K_M | 54.45GB | true | High quality, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q5_K_S.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/tree/main/72B-Qwen2.5-Kunou-v1-Q5_K_S) | Q5_K_S | 51.38GB | true | High quality, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q4_K_L.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_K_L.gguf) | Q4_K_L | 48.34GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf) | Q4_K_M | 47.42GB | false | Good quality, default size for most use cases, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q4_K_S.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_K_S.gguf) | Q4_K_S | 43.89GB | false | Slightly lower quality with more space savings, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q4_0.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_0.gguf) | Q4_0 | 41.38GB | false | Legacy format, offers online repacking for ARM CPU inference. |
| [72B-Qwen2.5-Kunou-v1-IQ4_NL.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ4_NL.gguf) | IQ4_NL | 41.32GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [72B-Qwen2.5-Kunou-v1-Q4_0_8_8.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_0_8_8.gguf) | Q4_0_8_8 | 41.23GB | false | Optimized for ARM and AVX inference. Requires 'sve' support for ARM (see details below). *Don't use on Mac*. |
| [72B-Qwen2.5-Kunou-v1-Q4_0_4_8.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_0_4_8.gguf) | Q4_0_4_8 | 41.23GB | false | Optimized for ARM inference. Requires 'i8mm' support (see details below). *Don't use on Mac*. |
| [72B-Qwen2.5-Kunou-v1-Q4_0_4_4.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q4_0_4_4.gguf) | Q4_0_4_4 | 41.23GB | false | Optimized for ARM inference. Should work well on all ARM chips, not for use with GPUs. *Don't use on Mac*. |
| [72B-Qwen2.5-Kunou-v1-Q3_K_XL.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q3_K_XL.gguf) | Q3_K_XL | 40.60GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [72B-Qwen2.5-Kunou-v1-IQ4_XS.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ4_XS.gguf) | IQ4_XS | 39.71GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [72B-Qwen2.5-Kunou-v1-Q3_K_L.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q3_K_L.gguf) | Q3_K_L | 39.51GB | false | Lower quality but usable, good for low RAM availability. |
| [72B-Qwen2.5-Kunou-v1-Q3_K_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q3_K_M.gguf) | Q3_K_M | 37.70GB | false | Low quality. |
| [72B-Qwen2.5-Kunou-v1-IQ3_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ3_M.gguf) | IQ3_M | 35.50GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [72B-Qwen2.5-Kunou-v1-Q3_K_S.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q3_K_S.gguf) | Q3_K_S | 34.49GB | false | Low quality, not recommended. |
| [72B-Qwen2.5-Kunou-v1-IQ3_XXS.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ3_XXS.gguf) | IQ3_XXS | 31.85GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [72B-Qwen2.5-Kunou-v1-Q2_K_L.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q2_K_L.gguf) | Q2_K_L | 31.03GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [72B-Qwen2.5-Kunou-v1-Q2_K.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-Q2_K.gguf) | Q2_K | 29.81GB | false | Very low quality but surprisingly usable. |
| [72B-Qwen2.5-Kunou-v1-IQ2_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ2_M.gguf) | IQ2_M | 29.34GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [72B-Qwen2.5-Kunou-v1-IQ2_S.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ2_S.gguf) | IQ2_S | 27.94GB | false | Low quality, uses SOTA techniques to be usable. |
| [72B-Qwen2.5-Kunou-v1-IQ2_XS.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ2_XS.gguf) | IQ2_XS | 27.06GB | false | Low quality, uses SOTA techniques to be usable. |
| [72B-Qwen2.5-Kunou-v1-IQ2_XXS.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ2_XXS.gguf) | IQ2_XXS | 25.49GB | false | Very low quality, uses SOTA techniques to be usable. |
| [72B-Qwen2.5-Kunou-v1-IQ1_M.gguf](https://huggingface.co/bartowski/72B-Qwen2.5-Kunou-v1-GGUF/blob/main/72B-Qwen2.5-Kunou-v1-IQ1_M.gguf) | IQ1_M | 23.74GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
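For reference, quants like these can be produced with llama.cpp's quantize tool by overriding those tensor types. A hedged sketch (the flags exist in recent llama.cpp builds; the file names here are placeholders):
```
./llama-quantize --token-embedding-type q8_0 --output-tensor-type q8_0 \
  model-F16.gguf 72B-Qwen2.5-Kunou-v1-Q4_K_L.gguf Q4_K_M
```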
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/72B-Qwen2.5-Kunou-v1-GGUF --include "72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/72B-Qwen2.5-Kunou-v1-GGUF --include "72B-Qwen2.5-Kunou-v1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (e.g. 72B-Qwen2.5-Kunou-v1-Q8_0) or download them all in place (./).
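The same downloads can be scripted with the `huggingface_hub` Python API; a minimal sketch mirroring the CLI commands above:
```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single (non-split) quant
hf_hub_download(repo_id="bartowski/72B-Qwen2.5-Kunou-v1-GGUF",
                filename="72B-Qwen2.5-Kunou-v1-Q4_K_M.gguf",
                local_dir=".")

# All shards of a split quant
snapshot_download(repo_id="bartowski/72B-Qwen2.5-Kunou-v1-GGUF",
                  allow_patterns=["72B-Qwen2.5-Kunou-v1-Q8_0/*"],
                  local_dir=".")
```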
</details>
## Q4_0_X_X information
New: Thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/9921), which added online repacking of weights, you can now simply use Q4_0 if your llama.cpp has been compiled for your ARM device.
Similarly, if you want slightly better performance, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which also repacks the weights for ARM (though only the 4_4 variant for now). Loading may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information</summary>
These are *NOT* for Metal (Apple) or GPU (nvidia/AMD/intel) offloading, only ARM chips (and certain AVX2/AVX512 CPUs).
If you're using an ARM chip, the Q4_0_X_X quants will have a substantial speedup. Check out Q4_0_4_4 speed comparisons [on the original pull request](https://github.com/ggerganov/llama.cpp/pull/5780#pullrequestreview-21657544660)
To check which one would work best for your ARM chip, you can check [AArch64 SoC features](https://gpages.juszkiewicz.com.pl/arm-socs-table/arm-socs.html) (thanks EloyOn!).
If you're using a CPU that supports AVX2 or AVX512 (typically server CPUs and AMD's latest Zen 5 CPUs) and are not offloading to a GPU, Q4_0_8_8 may offer a nice speed boost as well:
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
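To make that rule of thumb concrete, here is a small sketch using a few of the file sizes from the table above (quant names and sizes are copied from this card; the 2GB headroom is the suggested buffer for context and overhead):
```python
quants = {"Q5_K_M": 54.45, "Q4_K_M": 47.42, "IQ4_XS": 39.71, "IQ3_M": 35.50}

def pick_quant(budget_gb, headroom_gb=2.0):
    """Return the largest listed quant that fits within budget minus headroom."""
    fitting = {name: size for name, size in quants.items() if size <= budget_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(48))       # GPU-only, 48GB VRAM  -> 'IQ4_XS'
print(pick_quant(24 + 64))  # 24GB VRAM + 64GB RAM -> 'Q5_K_M'
```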
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
kenhktsui/setfit_test_imdb | kenhktsui | "2024-08-15T20:28:12Z" | 5 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | "2024-08-15T00:52:29Z" | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: I just watched 'The Shawshank Redemption' and I have to say, Tim Robbins and
Morgan Freeman delivered outstanding performances. Their acting skills truly brought
the characters to life. The way they portrayed the emotional depth of their characters
was impressive. I highly recommend this movie to anyone who loves a good drama.
- text: I walked into this movie expecting a lot, but what I got was a complete waste
of time. The acting was subpar, the plot was predictable, and the dialogue was
cringeworthy. I've seen high school productions that were better. The only thing
that kept me awake was the hope that something, anything, would happen to make
this movie worth watching. Unfortunately, that never came. I would not recommend
this to my worst enemy. 1/10, would not watch again even if you paid me.
- text: I just watched this movie and I'm still grinning from ear to ear. The humor
is wickedly clever and the cast is perfectly assembled. It's a laugh-out-loud
masterpiece that will leave you feeling uplifted and entertained.
- text: I was really looking forward to trying out this new restaurant, but unfortunately,
it was a huge disappointment. The service was slow, the food was cold, and the
ambiance was non-existent. I ordered the burger, but it was overcooked and tasted
like it had been sitting out for hours. Needless to say, I won't be back.
- text: I recently visited this restaurant for lunch and had an amazing experience.
The service was top-notch, our server was friendly and attentive, and the food
was incredible. I ordered the grilled chicken salad and it was cooked to perfection.
The portion size was generous and the prices were very reasonable. I would highly
recommend this place to anyone looking for a great meal.
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.87812
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:-------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| positive sentiment | <ul><li>"I just watched the latest Marvel movie and I'm still reeling from the shocking plot twist at the end. I didn't see it coming and it completely flipped my expectations on their head. The way the story unfolded was pure genius and had me on the edge of my seat the entire time. I'm not even kidding when I say that this movie is a must-see for anyone who loves a good surprise. 10/10 would recommend."</li><li>'I recently visited this restaurant and was blown away by the exceptional service from the staff. Our server, Alex, was attentive, knowledgeable, and made sure we had everything we needed throughout our meal. The food was delicious, but the service was truly what made our experience stand out. I would highly recommend this place to anyone looking for a great dining experience.'</li><li>"I just watched the funniest movie of my life, 'Dumb and Dumber'! Jim Carrey's comedic timing is unmatched. He has this incredible ability to make you laugh without even trying. The movie is full of hilarious moments, and I found myself giggling uncontrollably throughout. I highly recommend it to anyone looking for a good laugh."</li></ul> |
| negative sentiment | <ul><li>"I'm extremely disappointed with my recent purchase from this restaurant. The food was overcooked and the service was slow. The prices are way too high for the quality of food you receive. I won't be returning anytime soon."</li><li>"I'm extremely disappointed with the service I received at this restaurant. The hostess was completely unfriendly and unhelpful. We were seated for 20 minutes before anyone even came to take our order. The food was overpriced and took an hour to arrive. The server seemed put off by our presence and didn't even bother to refill our drinks. Needless to say, we will never be back."</li><li>'I was really looking forward to this movie, but unfortunately, it fell flat. The plot was predictable and lacked any real tension or suspense. The characters were underdeveloped and their motivations were unclear. The pacing was slow and the ending was completely unsatisfying. Overall, I was disappointed by the lack of effort put into creating a compelling story. 1/10 would not recommend.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8781 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("kenhktsui/setfit_test_imdb")
# Run inference
preds = model("I just watched this movie and I'm still grinning from ear to ear. The humor is wickedly clever and the cast is perfectly assembled. It's a laugh-out-loud masterpiece that will leave you feeling uplifted and entertained.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 20 | 50.76 | 80 |
| Label | Training Sample Count |
|:-------------------|:----------------------|
| negative sentiment | 13 |
| positive sentiment | 12 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (5, 5)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
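As a reference, the hyperparameters above map directly onto the `setfit` training API. A hedged sketch of how such a run could be reproduced (the two-example dataset is a placeholder, not the data used for this model):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

train_dataset = Dataset.from_dict({  # placeholder data
    "text": ["great movie", "terrible service"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(5, 5),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    seed=42,
    end_to_end=False,
    use_amp=False,
    load_best_model_at_end=True,
)
trainer = Trainer(model=model, args=args,
                  train_dataset=train_dataset,
                  eval_dataset=train_dataset)  # placeholder eval split
trainer.train()
```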
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-------:|:------:|:-------------:|:---------------:|
| 0.0455 | 1 | 0.1789 | - |
| 1.0 | 22 | - | 0.013 |
| 2.0 | 44 | - | 0.0024 |
| 2.2727 | 50 | 0.0003 | - |
| 3.0 | 66 | - | 0.0014 |
| **4.0** | **88** | **-** | **0.0011** |
| 4.5455 | 100 | 0.0003 | - |
| 5.0 | 110 | - | 0.0013 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.9.19
- SetFit: 1.1.0.dev0
- Sentence Transformers: 3.0.1
- Transformers: 4.39.0
- PyTorch: 2.4.0
- Datasets: 2.20.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
florianhoenicke/pet-shop-1000-64-20-BAAI_bge-small-en-v1.5-1000_9062874564 | florianhoenicke | "2024-04-15T15:00:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-04-15T15:00:51Z" | # pet-shop-1000-64-20-BAAI_bge-small-en-v1.5-1000_9062874564
## Model Description
pet-shop-1000-64-20-BAAI_bge-small-en-v1.5-1000_9062874564 is a fine-tuned version of BAAI/bge-small-en-v1.5 designed for a specific domain.
## Use Case
This model is designed to support various applications in natural language processing and understanding.
## Associated Dataset
The dataset for this model can be found [**here**](https://huggingface.co/datasets/florianhoenicke/pet-shop-1000-64-20-BAAI_bge-small-en-v1.5-1000_9062874564).
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. Here's a simple example to get you started:
```python
from transformers import AutoModel, AutoTokenizer
llm_name = "pet-shop-1000-64-20-BAAI_bge-small-en-v1.5-1000_9062874564"
tokenizer = AutoTokenizer.from_pretrained(llm_name)
model = AutoModel.from_pretrained(llm_name)
tokens = tokenizer("Your text here", return_tensors="pt")
outputs = model(**tokens)
# BGE-style models take the [CLS] token representation as the sentence embedding
embedding = outputs.last_hidden_state[:, 0]
```
|
gsaltintas/olmo_gsm8k-p1x0.01-3ep-6533545-1 | gsaltintas | "2025-04-07T14:35:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T12:48:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
idajikuu/SpeechT5_TTS_Haitian | idajikuu | "2023-07-24T02:10:24Z" | 148 | 3 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"speecht5 ",
"TTS",
"ht",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2023-07-23T22:31:09Z" | ---
language:
- ht
tags:
- 'speecht5 '
- TTS
---
# Fine-tuned SpeechT5 TTS Model for Haitian Creole
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) for the Haitian Creole language. It was fine-tuned on the CMU Haitian dataset.
## Model Description
The model is based on the SpeechT5 architecture, which is a variant of the T5 (Text-to-Text Transfer Transformer) model designed specifically for text-to-speech tasks. The model is capable of converting input text in Haitian Creole into corresponding speech.
## Intended Uses & Limitations
The model is intended for text-to-speech (TTS) applications in Haitian Creole language processing. It can be used for generating speech from written text, enabling applications such as audiobook narration, voice assistants, and more.
However, there are some limitations to be aware of:
- The model's performance heavily depends on the quality and diversity of the training data. Fine-tuning on more diverse and specific datasets might improve its performance.
- Like all machine learning models, this model may produce inaccuracies or errors in speech synthesis, especially for complex sentences or domain-specific jargon.
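For reference, inference follows the standard SpeechT5 text-to-speech recipe in 🤗 Transformers. A hedged sketch (the HiFi-GAN vocoder and the zero speaker embedding are placeholders, not part of this card; a real 512-dim x-vector should be supplied):
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("idajikuu/SpeechT5_TTS_Haitian")
model = SpeechT5ForTextToSpeech.from_pretrained("idajikuu/SpeechT5_TTS_Haitian")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bonjou, kijan ou ye?", return_tensors="pt")
speaker_embeddings = torch.zeros(1, 512)  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```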
## Training and Evaluation Data
The model was fine-tuned on the CMU Haitian dataset, which contains text and corresponding audio samples in Haitian Creole. The dataset was split into training and evaluation sets to assess the model's performance.
## Training Procedure
### Training Hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- per_device_train_batch_size: 16
- gradient_accumulation_steps: 2
- warmup_steps: 500
- max_steps: 4000
- gradient_checkpointing: True
- fp16: True
- evaluation_strategy: no
- per_device_eval_batch_size: 8
- save_steps: 1000
- logging_steps: 25
- report_to: ["tensorboard"]
- greater_is_better: False
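These settings correspond one-to-one to `transformers` `Seq2SeqTrainingArguments`; a hedged reconstruction (the output directory is an assumption):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_haitian",  # assumed
    learning_rate=1e-05,
    per_device_train_batch_size=16,
    gradient_accumulation_steps=2,
    warmup_steps=500,
    max_steps=4000,
    gradient_checkpointing=True,
    fp16=True,
    evaluation_strategy="no",
    per_device_eval_batch_size=8,
    save_steps=1000,
    logging_steps=25,
    report_to=["tensorboard"],
    greater_is_better=False,
)
```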
### Training Results
The training progress and evaluation results are as follows:
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5147 | 2.42 | 1000 | 0.4753 |
| 0.4932 | 4.84 | 2000 | 0.4629 |
| 0.4926 | 7.26 | 3000 | 0.4566 |
| 0.4907 | 9.69 | 4000 | 0.4542 |
| 0.4839 | 12.11 | 5000 | 0.4532 |
### Training Output
The training was completed with the following output:
- Global Step: 4000
- Training Loss: 0.3344
- Training Runtime: 7123.63 seconds
- Training Samples per Second: 17.97
- Training Steps per Second: 0.562
- Total FLOPs: 1.1690e+16
## Framework Versions
- Transformers 4.31.0
- PyTorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3 |
BenjaminOcampo/ihc-bert-baseline-seed-45_finetuned | BenjaminOcampo | "2025-01-31T14:36:37Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-31T13:32:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mrapacz/interlinear-pl-philta-baseline-normalized-unused | mrapacz | "2025-02-21T21:33:20Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"pl",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-08T12:25:06Z" | ---
license: cc-by-sa-4.0
language:
- pl
metrics:
- bleu
base_model:
- PhilTa
library_name: transformers
datasets:
- mrapacz/greek-interlinear-translations
---
# Model Card for Ancient Greek to Polish Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to Polish, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in the [GitHub repository](https://github.com/mrapacz/loreslm-interlinear-translation).
## Model Details
### Model Description
- **Developed By:** Maciej Rapacz, AGH University of Kraków
- **Model Type:** MT5ForConditionalGeneration
- **Base Model:** PhilTa
- **Tokenizer:** PhilTa
- **Language(s):** Ancient Greek (source) → Polish (target)
- **License:** CC BY-NC-SA 4.0
- **Tag Set:** Unused
- **Text Preprocessing:** Normalized
- **Morphological Encoding:** baseline (text only, no morphological tags)
### Model Performance
- **BLEU Score:** 0.07
- **SemScore:** 0.42
### Model Sources
- **Repository:** https://github.com/mrapacz/loreslm-interlinear-translation
- **Paper:** https://aclanthology.org/2025.loreslm-1.11/
## Usage Example
```python
>>> from transformers import AutoModelForSeq2SeqLM, T5TokenizerFast
>>> text = ['λεγει', 'αυτω', 'ο', 'ιησους', 'εγειρε', 'αρον', 'τον', 'κραβαττον', 'σου', 'και', 'περιπατει']
>>> text = " <extra_id_0>".join(text)
>>> tokenizer = T5TokenizerFast.from_pretrained("mrapacz/interlinear-pl-philta-baseline-normalized-unused")
>>> inputs = tokenizer(
text=text,
return_tensors="pt"
)
>>> model = T5ForConditionalGeneration.from_pretrained("mrapacz/interlinear-pl-philta-baseline-normalized-unused")
>>> outputs = model.generate(
**inputs,
max_new_tokens=100,
early_stopping=True,
)
>>> tokenizer.decode(outputs[0], skip_special_tokens=True)
'i - zaś - wyszystkie - wyszystkie - wyszystkie - wyszystkie - wyszystkie -'
```
## Citation
If you use this model, please cite the following paper:
```
@inproceedings{rapacz-smywinski-pohl-2025-low,
title = "Low-Resource Interlinear Translation: Morphology-Enhanced Neural Models for {A}ncient {G}reek",
author = "Rapacz, Maciej and
Smywi{\'n}ski-Pohl, Aleksander",
editor = "Hettiarachchi, Hansi and
Ranasinghe, Tharindu and
Rayson, Paul and
Mitkov, Ruslan and
Gaber, Mohamed and
Premasiri, Damith and
Tan, Fiona Anting and
Uyangodage, Lasitha",
booktitle = "Proceedings of the First Workshop on Language Models for Low-Resource Languages",
month = jan,
year = "2025",
address = "Abu Dhabi, United Arab Emirates",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2025.loreslm-1.11/",
pages = "145--165",
abstract = "Contemporary machine translation systems prioritize fluent, natural-sounding output with flexible word ordering. In contrast, interlinear translation maintains the source text`s syntactic structure by aligning target language words directly beneath their source counterparts. Despite its importance in classical scholarship, automated approaches to interlinear translation remain understudied. We evaluated neural interlinear translation from Ancient Greek to English and Polish using four transformer-based models: two Ancient Greek-specialized (GreTa and PhilTa) and two general-purpose multilingual models (mT5-base and mT5-large). Our approach introduces novel morphological embedding layers and evaluates text preprocessing and tag set selection across 144 experimental configurations using a word-aligned parallel corpus of the Greek New Testament. Results show that morphological features through dedicated embedding layers significantly enhance translation quality, improving BLEU scores by 35{\%} (44.67 {\textrightarrow} 60.40) for English and 38{\%} (42.92 {\textrightarrow} 59.33) for Polish compared to baseline models. PhilTa achieves state-of-the-art performance for English, while mT5-large does so for Polish. Notably, PhilTa maintains stable performance using only 10{\%} of training data. Our findings challenge the assumption that modern neural architectures cannot benefit from explicit morphological annotations. While preprocessing strategies and tag set selection show minimal impact, the substantial gains from morphological embeddings demonstrate their value in low-resource scenarios."
}
``` |
CaptainPollutionTV/DoctorBlight-DSM225 | CaptainPollutionTV | "2024-03-16T12:08:55Z" | 0 | 0 | null | [
"DreamBooth",
"Dark Sushi Mix v2.25",
"license:cc",
"region:us"
] | null | "2024-03-15T20:03:52Z" | ---
license: cc
tags:
- DreamBooth
- Dark Sushi Mix v2.25
---
Made by CaptainPollutionTV using the getimg.ai Dreambooth tool.
Details about the model:
- **Base Model:** Dark Sushi Mix v2.25
- **Instance prompt:** doctorblight
- **Class prompt:** a woman
- **Learning Rate:** 0.000001
- **Learning Rate Scheduler:** polynomial
- **Training Steps:** 10000 (200 steps warmup)
- **Class images:** 10000
- **Model seed:** 928007262
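Images like the samples below can be generated with the instance prompt via 🤗 Diffusers. A hedged sketch (it assumes the checkpoint is published in Diffusers format, which may not match this repository's layout):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CaptainPollutionTV/DoctorBlight-DSM225", torch_dtype=torch.float16
).to("cuda")
image = pipe("doctorblight, a woman, portrait, detailed").images[0]
image.save("doctorblight.png")
```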
Sample images:











































































 |
workRL/Lundar | workRL | "2022-06-26T02:45:14Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2022-06-26T02:44:44Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 129.59 +/- 116.73
name: mean_reward
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption -- check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("workRL/Lundar", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
RohithKumar/ppo-LunarLander-v2 | RohithKumar | "2023-03-29T08:07:42Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-29T08:07:15Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.57 +/- 16.96
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption -- check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("RohithKumar/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
youralien/roberta-Reflections-badareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current | youralien | "2025-03-08T21:00:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-large",
"base_model:finetune:FacebookAI/roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-08T09:22:17Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/roberta-large
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: roberta-Reflections-badareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-Reflections-badareas-eval_FeedbackESConv5pp_CARE10pp-sweeps-current
This model is a fine-tuned version of [FacebookAI/roberta-large](https://huggingface.co/FacebookAI/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0861
- Accuracy: 0.9538
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.363004557500736e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (ADAMW_TORCH_FUSED) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
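For reference, these map onto `transformers.TrainingArguments`; a hedged reconstruction (the output directory is a placeholder):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-reflections-badareas",  # placeholder
    learning_rate=7.363004557500736e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch_fused",
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```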
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:---:|
| 0.3164 | 1.0 | 74 | 0.1149 | 0.9538 | 0.0 | 0.0 | 0.0 |
| 0.2927 | 2.0 | 148 | 0.0987 | 0.9538 | 0.0 | 0.0 | 0.0 |
| 0.3006 | 3.0 | 222 | 0.0948 | 0.9538 | 0.0 | 0.0 | 0.0 |
| 0.2931 | 4.0 | 296 | 0.1147 | 0.9538 | 0.0 | 0.0 | 0.0 |
| 0.2872 | 5.0 | 370 | 0.0861 | 0.9538 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 2.21.0
- Tokenizers 0.21.0
|
MaziyarPanahi/notux-8x7b-v1-GGUF | MaziyarPanahi | "2024-02-04T21:07:08Z" | 88 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"tensorboard",
"safetensors",
"mixtral",
"text-generation",
"dpo",
"rlaif",
"preference",
"ultrafeedback",
"moe",
"en",
"de",
"es",
"fr",
"it",
"dataset:argilla/ultrafeedback-binarized-preferences-cleaned",
"base_model:mistralai/Mixtral-8x7B-Instruct-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us",
"base_model:argilla/notux-8x7b-v1",
"base_model:quantized:argilla/notux-8x7b-v1",
"conversational"
] | text-generation | "2024-02-04T20:00:26Z" | ---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- transformers
- tensorboard
- safetensors
- mixtral
- text-generation
- dpo
- rlaif
- preference
- ultrafeedback
- moe
- en
- de
- es
- fr
- it
- dataset:argilla/ultrafeedback-binarized-preferences-cleaned
- base_model:mistralai/Mixtral-8x7B-Instruct-v0.1
- license:apache-2.0
- autotrain_compatible
- endpoints_compatible
- has_space
- text-generation-inference
- region:us
model_name: notux-8x7b-v1-GGUF
base_model: argilla/notux-8x7b-v1
inference: false
model_creator: argilla
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---
# [MaziyarPanahi/notux-8x7b-v1-GGUF](https://huggingface.co/MaziyarPanahi/notux-8x7b-v1-GGUF)
- Model creator: [argilla](https://huggingface.co/argilla)
- Original model: [argilla/notux-8x7b-v1](https://huggingface.co/argilla/notux-8x7b-v1)
## Description
[MaziyarPanahi/notux-8x7b-v1-GGUF](https://huggingface.co/MaziyarPanahi/notux-8x7b-v1-GGUF) contains GGUF format model files for [argilla/notux-8x7b-v1](https://huggingface.co/argilla/notux-8x7b-v1).
## How to use
Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
### Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
</details>
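To make the byte accounting concrete, here is a quick check of the Q4_K figure (a sketch based on the block layout described above; the two fp16 super-block scales are an assumption about the storage format):
```python
# One Q4_K super-block: 8 blocks x 32 weights = 256 weights
weights_bits = 256 * 4      # 4-bit quantized weights
scales_bits = 8 * (6 + 6)   # per-block scale and min, 6 bits each
super_bits = 2 * 16         # two fp16 super-block scales (assumed)
print((weights_bits + scales_bits + super_bits) / 256)  # -> 4.5 bpw
```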
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: [MaziyarPanahi/notux-8x7b-v1-GGUF](https://huggingface.co/MaziyarPanahi/notux-8x7b-v1-GGUF) and below it, a specific filename to download, such as: notux-8x7b-v1-GGUF.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download MaziyarPanahi/notux-8x7b-v1-GGUF notux-8x7b-v1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
</details>
<details>
<summary>More advanced huggingface-cli download usage (click to read)</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download MaziyarPanahi/notux-8x7b-v1-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/notux-8x7b-v1-GGUF notux-8x7b-v1-GGUF.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 35 -m notux-8x7b-v1-GGUF.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"
```
Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
### How to load this model in Python code, using llama-cpp-python
For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```
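After installing, a quick import check confirms the wheel built correctly (a minimal sketch; recent llama-cpp-python releases expose `__version__`):
```python
import llama_cpp

print(llama_cpp.__version__)  # should print the installed version with no import errors
```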
#### Simple llama-cpp-python example code
```python
from llama_cpp import Llama
# Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
model_path="./notux-8x7b-v1-GGUF.Q4_K_M.gguf", # Download the model file first
n_ctx=32768, # The max sequence length to use - note that longer sequence lengths require much more resources
n_threads=8, # The number of CPU threads to use, tailor to your system and the resulting performance
n_gpu_layers=35 # The number of layers to offload to GPU, if you have GPU acceleration available
)
# Simple inference example
output = llm(
"<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant", # Prompt
max_tokens=512, # Generate up to 512 tokens
stop=["</s>"], # Example stop token - not necessarily correct for this specific model! Please check before using.
echo=True # Whether to echo the prompt
)
# Chat Completion API
llm = Llama(model_path="./notux-8x7b-v1-GGUF.Q4_K_M.gguf", chat_format="chatml") # This model uses ChatML-style prompts; set chat_format according to the model you are using
llm.create_chat_completion(
messages = [
{"role": "system", "content": "You are a story writing assistant."},
{
"role": "user",
"content": "Write a story about llamas."
}
]
)
```
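If you want token-by-token output (e.g. for a chat UI), `create_chat_completion` also accepts `stream=True` and yields OpenAI-style delta chunks — a minimal sketch, assuming the same `llm` object as above:
```python
# Stream the response chunk by chunk instead of waiting for the full completion
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a story about llamas."}],
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:
        print(delta["content"], end="", flush=True)
```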
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
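For a concrete starting point, here is a minimal LangChain sketch (assumptions: the `langchain_community` import path from the post-0.1 package split; older versions import `LlamaCpp` from `langchain.llms`):
```python
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./notux-8x7b-v1-GGUF.Q4_K_M.gguf",  # local GGUF file downloaded earlier
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
    n_ctx=32768,
)
print(llm.invoke("Name three uses for a llama."))
```
|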
apparition/ppo-Unity-SnowballTarget | apparition | "2023-03-19T06:36:42Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-03-19T06:36:36Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Step 1: Find your model_id: apparition/ppo-Unity-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf | RichardErkhov | "2024-10-27T16:00:16Z" | 17 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-10-27T15:39:03Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
BongLlama-1.1B-Chat-alpha-v0 - GGUF
- Model creator: https://huggingface.co/lumatic-ai/
- Original model: https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [BongLlama-1.1B-Chat-alpha-v0.Q2_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q2_K.gguf) | Q2_K | 0.4GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q3_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K.gguf) | Q3_K | 0.51GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [BongLlama-1.1B-Chat-alpha-v0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q4_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_0.gguf) | Q4_0 | 0.59GB |
| [BongLlama-1.1B-Chat-alpha-v0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q4_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K.gguf) | Q4_K | 0.62GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q4_1.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q4_1.gguf) | Q4_1 | 0.65GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q5_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_0.gguf) | Q5_0 | 0.71GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q5_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K.gguf) | Q5_K | 0.73GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q5_1.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q5_1.gguf) | Q5_1 | 0.77GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q6_K.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q6_K.gguf) | Q6_K | 0.84GB |
| [BongLlama-1.1B-Chat-alpha-v0.Q8_0.gguf](https://huggingface.co/RichardErkhov/lumatic-ai_-_BongLlama-1.1B-Chat-alpha-v0-gguf/blob/main/BongLlama-1.1B-Chat-alpha-v0.Q8_0.gguf) | Q8_0 | 1.09GB |
Original model description:
---
license: mit
datasets:
- lumatic-ai/BongChat-v0-10k
language:
- bn
- en
metrics:
- accuracy
library_name: transformers
pipeline_tag: text-generation
tags:
- text-generation-inference
- sft
- llama
- bongllama
- tinyllama
- llm
---
<style>
img{
width: 45vw;
height: 45vh;
margin: 0 auto;
display: flex;
align-items: center;
justify-content: center;
}
</style>
# lumaticai/BongLlama-1.1B-Chat-alpha-v0
Introducing BongLlama by LumaticAI, a fine-tuned version of TinyLlama 1.1B Chat on a Bengali dataset.
<img class="custom-image" src="bong_llama.png" alt="BongLlama">
# Model Details
## Model Description
BongLlama is part of our company's initiative to develop Indic and regional large language models. At LumaticAI, we are continuously helping our clients build custom AI solutions for their organizations.
As part of this initiative, we are launching open-source models specific to particular regions and languages.
BongLlama is an LLM built for West Bengal on a Bengali dataset. It is a 1.1B-parameter model. We fine-tuned TinyLlama/TinyLlama-1.1B-Chat-v1.0 on a 10k-sample Bengali dataset, lumatic-ai/BongChat-10k-v0, to produce our BongLlama 1.1B Chat Alpha v0 model.
We are continuously training and improving this model, and we plan to release versions at various sizes built on different base LLMs and datasets.
- **Developed by:** LumaticAI
- **Shared by [Optional]:** LumaticAI
- **Model type:** Language model
- **Language(s) (NLP):** en, bn
- **License:** mit
- **Parent Model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
# Uses
## Direct Use
- base model for further finetuning
- get an overview of how Indic LLMs perform in a specific language
- for fun
## Downstream Use
- can be deployed behind an API
- can be used to build a web app or demo
## Out-of-Scope Use
- should not be used for production purposes
- should not be used to generate text for research or academic purposes
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
### Pipeline
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers import pipeline
def formatted_prompt(question)-> str:
return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"
tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
pipe = pipeline(
"text-generation",
model=hub_model_name,
torch_dtype=torch.float16,
device_map="auto",
)
from time import perf_counter
start_time = perf_counter()
prompt = formatted_prompt('হ্যালো')
sequences = pipe(
prompt,
do_sample=True,
temperature=0.1,
top_p=0.9,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
max_new_tokens=256
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time,2)} seconds")
```
### Streaming Response (ChatGPT, Bard like)
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
def formatted_prompt(question)-> str:
return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"
tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
model = AutoModelForCausalLM.from_pretrained(hub_model_name)
prompt = formatted_prompt('prompt here')
inputs = tokenizer([prompt], return_tensors="pt")
streamer = TextStreamer(tokenizer)
_ = model.generate(**inputs, eos_token_id=[tokenizer.eos_token_id],streamer=streamer, max_new_tokens=256)
```
### Using Generation Config
```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig
from time import perf_counter
def formatted_prompt(question)-> str:
return f"<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant:"
hub_model_name = "lumatic-ai/BongLlama-1.1B-Chat-alpha-v0"
tokenizer = AutoTokenizer.from_pretrained(hub_model_name)
model = AutoModelForCausalLM.from_pretrained(hub_model_name)
prompt = formatted_prompt('হ্যালো')
# Check for GPU availability
if torch.cuda.is_available():
device = "cuda"
else:
device = "cpu"
# Move model and inputs to the GPU (if available)
model.to(device)
inputs = tokenizer(prompt, return_tensors="pt").to(device)
generation_config = GenerationConfig(
penalty_alpha=0.6,
do_sample=True,
top_k=5,
temperature=0.5,
repetition_penalty=1.2,
max_new_tokens=256,
pad_token_id=tokenizer.eos_token_id
)
start_time = perf_counter()
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
output_time = perf_counter() - start_time
print(f"Time taken for inference: {round(output_time, 2)} seconds")
```
</details>
# Training Details
## Training Data
We used our dataset of 10k samples, lumatic-ai/BongChat-v0-10k, which consists of questions and responses.
**Example Data**
- <|im_start|>user এটি কি ইতিবাচক পর্যালোচনা? এই ফ্র্যাঞ্চাইজি কখনও হার্ট স্টপ স্টান্টস, ফাইট সিকোয়েন্স এবং তারকা শক্তি বন্ধ করে হতাশ করে না।এবার মূল চারটি ছাড়াও নতুন মুখগুলি হলেন আবুয়েলিতা চরিত্রে রিতা মোরেনো এবং দান্তে জেসন মোমোয়া যিনি শোটি চুরি করেছেন।<|im_end|> <|im_start|>assistant এটি একটি খুব ইতিবাচক পর্যালোচনা, এটি ইঙ্গিত করে যে ফ্র্যাঞ্চাইজি শ্রোতাদের জন্য আরও পছন্দসই চরিত্রগুলি যুক্ত করার পাশাপাশি অতীতের প্রত্যাশার ভিত্তিতে বিতরণ করেছে।<|im_end|>
## Training Procedure
### Preprocessing
- Dataset Format
<|im_start|>user <question><|im_end|> <|im_start|>assistant <response><|im_end|>
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
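For reproducibility, these settings map onto `transformers.TrainingArguments` roughly as follows — a hedged sketch only; dataset preparation and the trainer wiring are omitted, and `output_dir` is a placeholder:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bongllama-sft",        # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # total train batch size 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3,
    seed=42,
    fp16=True,                         # native AMP mixed precision
)
```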
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
# Evaluation
### Metrics
- train/loss
- steps
## Results
||\_runtime|\_timestamp|train/epoch|train/total\_flos|train/train\_loss|train/global\_step|train/train\_steps\_per\_second|train/loss|train/train\_samples\_per\_second|train/train\_runtime|\_step|train/learning\_rate|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|0|205\.76071906089783|1705483341\.4811552|0\.08|||100||1\.2865|||0|0\.0001869158878504673|
|1|406\.9242510795593|1705483542\.6446872|0\.17|||200||1\.0698|||1|0\.00019964245392895794|
|2|607\.5763952732086|1705483743\.2968314|0\.25|||300||1\.0457|||2|0\.00019846317589644678|
|3|808\.9941129684448|1705483944\.714549|0\.34|||400||1\.0131|||3|0\.00019646988832610704|
|4|1012\.7936038970947|1705484148\.51404|0\.42|||500||1\.0|||4|0\.00019367907001906532|
|5|1217\.8231673240662|1705484353\.5436034|0\.51|||600||0\.9913|||5|0\.0001901137930801933|
|6|1422\.651272058487|1705484558\.3717082|0\.59|||700||0\.9904|||6|0\.00018580353217762766|
|7|1624\.9901471138|1705484760\.7105832|0\.67|||800||0\.9705|||7|0\.0001807839208713596|
|8|1827\.1909170150757|1705484962\.911353|0\.76|||900||0\.9661|||8|0\.00017509645702535999|
|9|2033\.6470217704773|1705485169\.3674579|0\.84|||1000||0\.9588|||9|0\.00016878815973864268|
|10|2241\.5517098903656|1705485377\.272146|0\.93|||1100||0\.9469|||10|0\.00016191118063146672|
|11|2446\.751221895218|1705485582\.471658|1\.01|||1200||0\.9453|||11|0\.0001545223727002313|
|12|2648\.367230653763|1705485784\.0876667|1\.09|||1300||0\.9329|||12|0\.0001466828203054036|
|13|2849\.9791855812073|1705485985\.6996217|1\.18|||1400||0\.9299|||13|0\.0001384573341781387|
|14|3050\.282051086426|1705486186\.0024872|1\.26|||1500||0\.9181|||14|0\.00012991391562044527|
|15|3252\.6823406219482|1705486388\.4027767|1\.35|||1600||0\.917|||15|0\.00012112319432843371|
|16|3456\.3907039165497|1705486592\.11114|1\.43|||1700||0\.919|||16|0\.00011215784448624378|
|17|3658\.387463569641|1705486794\.1078997|1\.52|||1800||0\.9156|||17|0\.00010309198395788984|
|18|3860\.850716114044|1705486996\.5711522|1\.6|||1900||0\.9074|||18|9\.400056154399221e-05|
|19|4063\.906144142151|1705487199\.6265802|1\.68|||2000||0\.9072|||19|8\.49587373690336e-05|
|20|4266\.29203081131|1705487402\.012467|1\.77|||2100||0\.9061|||20|7\.604126152157019e-05|
|21|4468\.759161949158|1705487604\.479598|1\.85|||2200||0\.9104|||21|6\.732185608427e-05|
|22|4671\.109050750732|1705487806\.8294868|1\.94|||2300||0\.9016|||22|5\.8872605662626776e-05|
|23|4875\.181975841522|1705488010\.902412|2\.02|||2400||0\.8957|||23|5\.076336145093832e-05|
|24|5077\.5954213142395|1705488213\.3158574|2\.11|||2500||0\.8948|||24|4\.3061163762223156e-05|
|25|5280\.958572149277|1705488416\.6790082|2\.19|||2600||0\.8833|||25|3\.582968779610564e-05|
|26|5483\.901570320129|1705488619\.6220064|2\.27|||2700||0\.9019|||26|2\.912871722658781e-05|
|27|5684\.498034954071|1705488820\.218471|2\.36|||2800||0\.8921|||27|2\.30136499616351e-05|
|28|5885\.339627027512|1705489021\.0600631|2\.44|||2900||0\.8897|||28|1\.753504016053409e-05|
|29|6089\.49475812912|1705489225\.2151942|2\.53|||3000||0\.8765|||29|1\.2738180295232205e-05|
|30|6291\.281028032303|1705489427\.0014641|2\.61|||3100||0\.889|||30|8\.662726710819169e-06|
|31|6494\.627055644989|1705489630\.3474917|2\.69|||3200||0\.8846|||31|5\.342371780697386e-06|
|32|6695\.168158054352|1705489830\.8885942|2\.78|||3300||0\.8908|||32|2\.804565366782108e-06|
|33|6898\.186992406845|1705490033\.9074285|2\.86|||3400||0\.885|||33|1\.0702878874610523e-06|
|34|7099\.970013856888|1705490235\.69045|2\.95|||3500||0\.8871|||34|1\.5387686939386526e-07|
|35|7221\.330135822296|1705490357\.050572|3\.0|8\.3571998449877e+16|0\.9397975607756582|3561|0\.491||3\.926|7259\.0631|35||
# Model Examination
We will be further fine-tuning this model on larger datasets to see how it performs.
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** 1 X Tesla T4
- **Hours used:** 2.21
- **Cloud Provider:** Google Colab
- **Compute Region:** India
- **Carbon Emitted:** 0.14
# Technical Specifications
## Model Architecture and Objective
Fine-tuned from the TinyLlama 1.1B Chat model
### Hardware
1 X Tesla T4
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{BongLlama-1.1B-Chat-alpha-v0,
url={[https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0](https://huggingface.co/lumatic-ai/BongLlama-1.1B-Chat-alpha-v0)},
title={BongLlama 1.1B Chat Alpha V0},
author={LumaticAI, Rohan Shaw, Vivek Kushal, Jeet Ghosh},
year={2024}, month={Jan}
}
```
# Model Card Authors
lumatic-ai
# Model Card Contact
email : [email protected]
|
whu9/multi_doc_sum | whu9 | "2023-03-14T06:41:57Z" | 7 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"generated_from_trainer",
"dataset:cnn_dailymail",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | "2023-03-08T22:25:16Z" | ---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: multi_doc_sum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multi_doc_sum
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu113
- Datasets 2.10.1
- Tokenizers 0.13.0
|
Hieu-Pham/Llama-2-7B-hf-cooking-IA3 | Hieu-Pham | "2023-10-11T07:34:36Z" | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | "2023-10-11T07:25:32Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
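For reference, that configuration corresponds to the following `BitsAndBytesConfig`, and the IA3 adapter itself loads with PEFT — a hedged sketch, assuming the base model and adapter repo names from this card:
```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, "Hieu-Pham/Llama-2-7B-hf-cooking-IA3")
```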
### Framework versions
- PEFT 0.6.0.dev0
|
habin/EEVE-Korean-kornerstone-10.8B-v1.0 | habin | "2024-06-19T05:03:07Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-19T04:33:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
memevis/shi4 | memevis | "2025-01-20T08:04:06Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-20T07:58:39Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
violetxi/sft_tir_rl_prep_Llama_lr0.0001_bs32_wd0.0_wp0.3_checkpoint-epoch0 | violetxi | "2025-02-22T05:45:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-22T05:42:29Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
juleslahmi/distilbert-base-uncased-finetuned-assurance-working | juleslahmi | "2024-07-11T14:45:57Z" | 4 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2024-07-11T14:24:32Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: juleslahmi/distilbert-base-uncased-finetuned-assurance-working
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# juleslahmi/distilbert-base-uncased-finetuned-assurance-working
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2946
- Validation Loss: 3.9048
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -979, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
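Equivalently, this optimizer can be rebuilt with `transformers.create_optimizer` — a hedged sketch; `num_train_steps` is a placeholder you must derive from your dataset (the logged `decay_steps: -979` above suggests the original run's total step count was smaller than its 1000-step warmup):
```python
from transformers import create_optimizer

num_train_steps = 10_000  # placeholder: steps_per_epoch * num_epochs for your data
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=num_train_steps,
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```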
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2946 | 3.9048 | 0 |
### Framework versions
- Transformers 4.42.3
- TensorFlow 2.16.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
jiyanbaran/lunarlander-v2 | jiyanbaran | "2024-02-21T13:35:47Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-21T12:05:50Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 173.44 +/- 99.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is a placeholder — check the repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# NOTE: the filename below is a placeholder -- check the repo's file list for the actual .zip
checkpoint = load_from_hub(repo_id="jiyanbaran/lunarlander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
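From there, evaluation follows the usual SB3 pattern (a hedged sketch; requires `gymnasium[box2d]` for the LunarLander environment):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```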
|
Niyantha23M/llama-7b-chat-75000-25-75-L | Niyantha23M | "2024-04-12T09:09:29Z" | 0 | 0 | null | [
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-04-12T09:09:24Z" | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: llama-7b-chat-75000-25-75-L
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-chat-75000-25-75-L
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2200
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4400
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.33.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.13.3
|
sd-concepts-library/wakefit-coffee-table | sd-concepts-library | "2023-01-10T11:51:11Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2023-01-10T11:51:07Z" | ---
license: mit
---
### wakefit-coffee-table on Stable Diffusion
This is the `<wakefit-coffee-table>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
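Outside the notebooks, the concept can also be loaded locally with `diffusers` — a minimal sketch; the base checkpoint name is an assumption, and `load_textual_inversion` requires a recent diffusers release:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/wakefit-coffee-table")
image = pipe("a photo of a <wakefit-coffee-table> in a sunlit living room").images[0]
image.save("wakefit-coffee-table.png")
```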
Here is the new concept you will be able to use as an `object`:







|
Xu-Ouyang/pythia-70m-deduped-int4-step30000-GPTQ-wikitext2 | Xu-Ouyang | "2024-09-11T17:04:43Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-09-11T17:04:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
earlzero/ppo-Huggy | earlzero | "2024-09-20T19:08:42Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2024-09-20T19:06:43Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: earlzero/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
rootxhacker/red-llama2-qlora | rootxhacker | "2024-02-27T06:27:57Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T06:27:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
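In the absence of an official snippet, the repository name suggests this is a QLoRA (PEFT) adapter for a Llama-2 base model — that reading, and the base-model id below, are unverified assumptions. A minimal loading sketch under those assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"  # assumed base model; not confirmed by this card
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")

# Attach the (assumed) LoRA adapter stored in this repository
model = PeftModel.from_pretrained(base, "rootxhacker/red-llama2-qlora")

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```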
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhilash1910/distilbert-squadv1 | abhilash1910 | "2021-09-14T07:25:33Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-03-02T23:29:05Z" | # DistilBERT--SQuAD-v1
Training is done on the [SQuAD](https://huggingface.co/datasets/squad) dataset. The model can be accessed via [HuggingFace](https://huggingface.co/abhilash1910/distilbert-squadv1):
## Model Specifications
We have used the following parameters:
- Training Batch Size : 512
- Learning Rate : 3e-5
- Training Epochs : 0.75
- Sequence Length : 384
- Stride : 128
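For readers who want to reproduce a comparable run, here is a minimal sketch of how these parameters map onto a 🤗 `Trainer` fine-tune on SQuAD. The original training script is not published, so everything beyond the listed hyperparameters (preprocessing details, output paths) is an assumption; a per-device batch size of 512 in particular may require gradient accumulation on real hardware.

```python
from datasets import load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")
squad = load_dataset("squad")

def preprocess(examples):
    # Tokenize question/context pairs into overlapping 384-token windows (stride 128)
    inputs = tokenizer(
        [q.strip() for q in examples["question"]],
        examples["context"],
        max_length=384,
        stride=128,
        truncation="only_second",
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    offset_mapping = inputs.pop("offset_mapping")
    sample_map = inputs.pop("overflow_to_sample_mapping")
    start_positions, end_positions = [], []
    for i, offsets in enumerate(offset_mapping):
        answer = examples["answers"][sample_map[i]]
        start_char = answer["answer_start"][0]
        end_char = start_char + len(answer["text"][0])
        sequence_ids = inputs.sequence_ids(i)
        # Locate the span of the context within this window
        idx = 0
        while sequence_ids[idx] != 1:
            idx += 1
        context_start = idx
        while idx < len(sequence_ids) and sequence_ids[idx] == 1:
            idx += 1
        context_end = idx - 1
        if offsets[context_start][0] > start_char or offsets[context_end][1] < end_char:
            # Answer is not fully inside this window: label it (0, 0)
            start_positions.append(0)
            end_positions.append(0)
        else:
            idx = context_start
            while idx <= context_end and offsets[idx][0] <= start_char:
                idx += 1
            start_positions.append(idx - 1)
            idx = context_end
            while idx >= context_start and offsets[idx][1] >= end_char:
                idx -= 1
            end_positions.append(idx + 1)
    inputs["start_positions"] = start_positions
    inputs["end_positions"] = end_positions
    return inputs

train_ds = squad["train"].map(preprocess, batched=True,
                              remove_columns=squad["train"].column_names)

args = TrainingArguments(
    output_dir="distilbert-squadv1",  # assumed output path
    per_device_train_batch_size=512,  # batch size as listed above
    learning_rate=3e-5,
    num_train_epochs=0.75,            # fractional epochs are supported
)
Trainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer).train()
```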
## Usage Specifications
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model = AutoModelForQuestionAnswering.from_pretrained('abhilash1910/distilbert-squadv1')
tokenizer = AutoTokenizer.from_pretrained('abhilash1910/distilbert-squadv1')
nlp_QA = pipeline('question-answering', model=model, tokenizer=tokenizer)

QA_inp = {
    'question': 'What is the fund price of Huggingface in NYSE?',
    'context': 'Huggingface Co. has a total fund price of $19.6 million dollars'
}
result = nlp_QA(QA_inp)
result
```
The result is:
```bash
{'score': 0.38547369837760925,
'start': 42,
'end': 55,
'answer': '$19.6 million'}
```
---
language:
- en
license: apache-2.0
datasets:
- squad_v1
---
|
AlignmentResearch/robust_llm_pythia-pm-70m-niki-ada-v4-s-0 | AlignmentResearch | "2024-05-28T21:05:58Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-28T21:05:49Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
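This section is empty in the source card; based on the repository tags (`gpt_neox`, `text-classification`), a standard sequence-classification call presumably applies. The class semantics are undocumented, so the sketch below only prints raw probabilities:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_pythia-pm-70m-niki-ada-v4-s-0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example input text.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # label meanings are not documented in this card
```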
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hassaanik/Face_Mask_Detector | hassaanik | "2024-02-29T10:51:23Z" | 48 | 1 | transformers | [
"transformers",
"pytorch",
"AlbertConfig",
"image-classification",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-02T13:01:11Z" | ---
pipeline_tag: image-classification
--- |
SatCat/Reinforce-Pixelcopter-PLE-v0 | SatCat | "2023-01-10T03:16:42Z" | 0 | 0 | null | [
"Pixelcopter-PLE-v0",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-10T03:16:30Z" | ---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Pixelcopter-PLE-v0
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Pixelcopter-PLE-v0
type: Pixelcopter-PLE-v0
metrics:
- type: mean_reward
value: 18.20 +/- 11.22
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **Pixelcopter-PLE-v0**
This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
anasmkh/DB_Engineering_fintuned_model | anasmkh | "2024-08-09T21:54:41Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-09T21:53:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
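No snippet is given; since the repository is tagged `gpt_neox` / `text-generation`, the usual pipeline call should apply. This is an untested sketch, and the prompt is purely illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="anasmkh/DB_Engineering_fintuned_model")
print(generator("Write a SQL query that", max_new_tokens=64)[0]["generated_text"])
```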
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matmei/ToRespond | matmei | "2024-07-11T16:54:15Z" | 6 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-07-11T13:24:23Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: matmei/toRespond
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# matmei/toRespond
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0834
- Validation Loss: 0.4592
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
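The card does not spell out intended usage; judging from the repository name, the classifier presumably predicts whether a message needs a response — that interpretation, and the example below, are assumptions. A minimal TensorFlow inference sketch:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "matmei/ToRespond"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Can you send me the report by Friday?", return_tensors="tf")
logits = model(**inputs).logits
print(tf.nn.softmax(logits, axis=-1).numpy())  # class semantics are undocumented
```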
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1560, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.4725 | 0.3882 | 0 |
| 0.3078 | 0.3647 | 1 |
| 0.1741 | 0.4058 | 2 |
| 0.0834 | 0.4592 | 3 |
### Framework versions
- Transformers 4.36.0
- TensorFlow 2.7.1
- Datasets 2.18.0
- Tokenizers 0.15.0
|
mradermacher/Elysium-Omni-11b-GGUF | mradermacher | "2024-06-12T11:36:26Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"powermove72/Elysium-Omni-Itt-11b",
"powermove72/SoMix2-xb",
"en",
"base_model:powermove72/Elysium-Omni-11b",
"base_model:quantized:powermove72/Elysium-Omni-11b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-06-12T10:56:40Z" | ---
base_model: powermove72/Elysium-Omni-11b
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- powermove72/Elysium-Omni-Itt-11b
- powermove72/SoMix2-xb
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/powermove72/Elysium-Omni-11b
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
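As one concrete route, single-file GGUF quants can be loaded with `llama-cpp-python`; the sketch below is a generic pattern, not a tested invocation for this model, and the prompt format of the underlying merge is not specified here:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Elysium-Omni-11b.Q4_K_M.gguf",  # path to the downloaded quant
    n_ctx=4096,       # context window; raise or lower to fit your RAM/VRAM
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)
out = llm("Q: What is a GGUF file?\nA:", max_tokens=128, stop=["Q:"])
print(out["choices"][0]["text"])
```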
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q2_K.gguf) | Q2_K | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.IQ3_XS.gguf) | IQ3_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q3_K_S.gguf) | Q3_K_S | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.IQ3_S.gguf) | IQ3_S | 5.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.IQ3_M.gguf) | IQ3_M | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q3_K_M.gguf) | Q3_K_M | 5.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q3_K_L.gguf) | Q3_K_L | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.IQ4_XS.gguf) | IQ4_XS | 6.2 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q4_K_S.gguf) | Q4_K_S | 6.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q4_K_M.gguf) | Q4_K_M | 6.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q5_K_S.gguf) | Q5_K_S | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q5_K_M.gguf) | Q5_K_M | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q6_K.gguf) | Q6_K | 9.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Elysium-Omni-11b-GGUF/resolve/main/Elysium-Omni-11b.Q8_0.gguf) | Q8_0 | 12.0 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
DeusImperator/Midnight-Miqu-70B-v1.5_exl2_2.4bpw | DeusImperator | "2024-05-19T10:12:22Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"base_model:migtissera/Tess-70B-v1.6",
"base_model:merge:migtissera/Tess-70B-v1.6",
"base_model:sophosympatheia/Midnight-Miqu-70B-v1.0",
"base_model:merge:sophosympatheia/Midnight-Miqu-70B-v1.0",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-05-17T13:43:36Z" | ---
base_model:
- sophosympatheia/Midnight-Miqu-70B-v1.0
- migtissera/Tess-70B-v1.6
library_name: transformers
tags:
- mergekit
- merge
license: other
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/Tn9MBg6.png" alt="MidnightMiqu" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
# Midnight-Miqu-70B-v1.5 - EXL2 2.4bpw
This is a 2.4bpw EXL2 quant of [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5)
This quant was made using exllamav2-0.0.20 with default dataset and settings.
This quant fits 25k context on 24GB VRAM on Windows in my local testing (with exl2 Q4 cache); you might be able to fit more depending on what else is using VRAM.
I tested this quant briefly in some random RPs (including ones over 8k and 20k context) and it seems to work fine.
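For reference, loading an EXL2 quant with the Q4 cache in exllamav2 looks roughly like the sketch below. This is a generic pattern for exllamav2 around version 0.0.20, not a recipe tested against this exact checkpoint, and the local path is a placeholder:

```python
from exllamav2 import (ExLlamaV2, ExLlamaV2Cache_Q4, ExLlamaV2Config,
                       ExLlamaV2Tokenizer)
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Midnight-Miqu-70B-v1.5_exl2_2.4bpw"  # local download path
config.prepare()
config.max_seq_len = 25 * 1024  # roughly the 25k context mentioned above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)  # Q4 cache keeps long contexts in 24GB
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.min_p = 0.12  # matches the sampler tips further down

print(generator.generate_simple("USER:\nHello!\n\nASSISTANT:\n", settings, 200))
```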
## Prompt Templates
See [sophosympatheia/Midnight-Miqu-70B-v1.5](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.5) for Silly Tavern presets and templates.
In general the model uses Vicuna or Mistral formats but others also work (perhaps a bit worse than those two).
Further details on prompting this model will also pop up under the [model discussions](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0/discussions)
## Similar quants
Something a bit smaller but possibly less smart [Midnight-Miqu-70B-v1.5_exl2_2.25bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_2.25bpw)
Something a bit bigger but possibly smarter (and harder to fit with big context on GPU) [Midnight-Miqu-70B-v1.5_exl2_2.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_2.5bpw)
### Original readme below
---
### Overview
Looking for the 103B version? You can get it from [FluffyKaeloky/Midnight-Miqu-103B-v1.5](https://huggingface.co/FluffyKaeloky/Midnight-Miqu-103B-v1.5).
This is a DARE Linear merge between [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0) and [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6).
This version is close in feel and performance to Midnight Miqu v1.0 but I think it picked up some goodness from Tess. Their EQ Bench scores are virtually the same and their post-EXL2 quant perplexity scores were the same too. However, Midnight Miqu v1.5 passes some tests I use that Midnight Miqu v1.0 fails, without sacrificing writing quality.
This model is uncensored. *You are responsible for whatever you do with it.*
This model was designed for roleplaying and storytelling and I think it does well at both. It may also perform well at other tasks but I have not tested its performance in other areas.
### Long Context Tips
You can run this model out to 32K context with alpha_rope set to 1, just like with Miqu.
### Sampler Tips
* I recommend using Quadratic Sampling (i.e. smoothing factor) for creative work. I think this version performs best with a smoothing factor close to 0.2.
* I recommend using Min-P. Experiment to find your best setting.
* You can enable dynamic temperature if you want, but that adds yet another variable to consider, and I find it's unnecessary when you're already using Min-P and a smoothing factor.
* You don't need to use a high repetition penalty with this model, such as going above 1.10, but experiment with it.
Experiment with any and all of the settings below! What suits my preferences may not suit yours.
If you save the below settings as a .json file, you can import them directly into Silly Tavern.
```
{
"temp": 1,
"temperature_last": true,
"top_p": 1,
"top_k": 0,
"top_a": 0,
"tfs": 1,
"epsilon_cutoff": 0,
"eta_cutoff": 0,
"typical_p": 1,
"min_p": 0.12,
"rep_pen": 1.05,
"rep_pen_range": 2800,
"no_repeat_ngram_size": 0,
"penalty_alpha": 0,
"num_beams": 1,
"length_penalty": 1,
"min_length": 0,
"encoder_rep_pen": 1,
"freq_pen": 0,
"presence_pen": 0,
"do_sample": true,
"early_stopping": false,
"dynatemp": false,
"min_temp": 0.8,
"max_temp": 1.35,
"dynatemp_exponent": 1,
"smoothing_factor": 0.23,
"add_bos_token": true,
"truncation_length": 2048,
"ban_eos_token": false,
"skip_special_tokens": true,
"streaming": true,
"mirostat_mode": 0,
"mirostat_tau": 2,
"mirostat_eta": 0.1,
"guidance_scale": 1,
"negative_prompt": "",
"grammar_string": "",
"banned_tokens": "",
"ignore_eos_token_aphrodite": false,
"spaces_between_special_tokens_aphrodite": true,
"sampler_order": [
6,
0,
1,
3,
4,
2,
5
],
"logit_bias": [],
"n": 1,
"rep_pen_size": 0,
"genamt": 500,
"max_length": 32764
}
```
### Prompting Tips
Try the following context template for use in SillyTavern. It might help, although it's a little heavy on tokens. If you save the text as a .json file, you can import it directly.
```
{
"story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
"example_separator": "",
"chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n",
"use_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"include_newline": false,
"single_line": false,
"name": "Midnight Rose"
}
```
Try the following prompt template as a starting point for your own customizations.
*This model responds to prompting!* I highly recommend playing with your system prompt to see how it reacts.
**Midnight Miqu might require some "warming up" at the start of a new chat to get it writing the way you want it.** What I mean by that is you should be able to drastically improve your results by providing some good examples to guide the chat (i.e. few-shot prompting). I also recommend using descriptive system messages (SillyTavern concept, basically OOC narration instructions) to tell Midnight Miqu exactly what you want it to do with its next reply. Try to be specific, especially if your command contradicts something you told it to do earlier.
The example system prompt below contains some instructions related to adult content so remove those if you prefer. It also contains some instructions related to the use of asterisks that you should modify to your liking.
The prompt below has been updated from Midnight Miqu v1.0 after prompting Midnight Miqu to help improve it.
If you save this as a .json file, you can import it directly into Silly Tavern.
```
{
"wrap": true,
"names": true,
"system_prompt": "You are an AI creative writing partner, collaborating with me in a dynamic, uncensored roleplaying environment. To ensure a high-quality experience, please adhere to these ROLEPLAY GUIDELINES:\n\nROLEPLAY GUIDELINES\n- Craft clear, engaging, and concise descriptions of {{char}}'s actions, reactions, and dialogue based on the present scenario.\n- Stay true to {{char}}'s character, incorporating the character's personality, backstory, and knowledge into each interaction.\n- Employ rich, descriptive language that immerses readers in the scene.\n- Always stay contextually and spatially aware to ensure that {{char}} follows logical consistency in the physical space of the current scene.\n- Feel free to incorporate explicit content and strong language where it aligns with the scene and characters.\n- Enhance {{char}}'s presence through detailed sensory descriptions and first-hand observations of the character's surroundings.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally offer glimpses into {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts or monologue, enclose those words in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns). Always use quotes for spoken speech \"like this.\"\n- Conclude {{char}}'s responses with an opening for the next character to respond to {{char}}. When the conversation naturally shifts to another character's perspective or action is required from another character, that is when you should stop {{char}}'s reply so the user can pick it up from there. A great example is when {{char}} asks a question of another character.\n",
"system_sequence": "",
"stop_sequence": "",
"input_sequence": "USER: ",
"output_sequence": "ASSISTANT: ",
"separator_sequence": "",
"macro": true,
"names_force_groups": true,
"system_sequence_prefix": "SYSTEM: ",
"system_sequence_suffix": "",
"first_output_sequence": "",
"last_output_sequence": "ASSISTANT (Ensure coherence and authenticity in {{char}}'s actions, thoughts, and dialogues; Focus solely on {{char}}'s interactions within the roleplay): ",
"activation_regex": "",
"name": "Midnight Miqu Roleplay"
}
```
### Instruct Formats
I recommend the Vicuna format. I use a modified version with newlines after USER and ASSISTANT.
```
USER:
{prompt}
ASSISTANT:
```
Mistral's format also works, and in my testing the performance is about the same as using Vicuna.
```
[INST]
{prompt}
[/INST]
```
You could also try ChatML (don't recommend it)
```
<|im_start|>system
{Your system prompt goes here}<|im_end|>
<|im_start|>user
{Your message as the user will go here}<|im_end|>
<|im_start|>assistant
```
### Quantizations
* GGUF
* [mradermacher/Midnight-Miqu-70B-v1.5-GGUF](https://huggingface.co/mradermacher/Midnight-Miqu-70B-v1.5-GGUF) -- Various static GGUF quants
* GPTQ
* [Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G](https://huggingface.co/Kotokin/Midnight-Miqu-70B-v1.5_GPTQ32G)
* EXL2
* [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.0bpw)
* [Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_4.5bpw)
* [Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_5.0bpw)
* [Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw](https://huggingface.co/Dracones/Midnight-Miqu-70B-v1.5_exl2_6.0bpw)
* If you don't see something you're looking for, [try searching Hugging Face](https://huggingface.co/models?search=midnight-miqu-70b-v1.5). There may be newer quants available than what I've documented here.
### Licence and usage restrictions
<font color="red">152334H/miqu-1-70b-sf was based on a leaked version of one of Mistral's models.</font>
All miqu-derived models, including this merge, are **only suitable for personal use.** Mistral has been cool about it so far, but you should be aware that by downloading this merge you are assuming whatever legal risk is inherent in acquiring and using a model based on leaked weights.
This merge comes with no warranties or guarantees of any kind, but you probably already knew that.
I am not a lawyer and I do not profess to know what we have gotten ourselves into here. You should consult with a lawyer before using any Hugging Face model beyond private use... but definitely don't use this one for that!
## Merge Details
### Merge Method
This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method using [152334H_miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf) as a base.
### Models Merged
The following models were included in the merge:
* [sophosympatheia/Midnight-Miqu-70B-v1.0](https://huggingface.co/sophosympatheia/Midnight-Miqu-70B-v1.0)
* [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_linear
base_model: /home/llm/mergequant/models/BASE/152334H_miqu-1-70b-sf # base model
models:
- model: /home/llm/mergequant/models/midnight-miqu-70b-v1.0
- model: /home/llm/mergequant/models/BASE/Tess-70B-v1.6
parameters:
weight: 1.0
dtype: float16
```
### Notes
I tried several methods of merging Midnight Miqu v1.0 with Tess v1.6, and this dare_linear approach worked the best by far. I tried the same approach with other Miqu finetunes like ShinojiResearch/Senku-70B-Full and abideen/Liberated-Miqu-70B, but there was a huge difference in performance. The merge with Tess was the best one.
I also tried the SLERP approach I used to create Midnight Miqu v1.0, only using Tess instead of 152334H_miqu-1-70b in that config, and that result was nowhere near as good either. |
newsmediabias/UnBIAS-Named-Entity-Recognition | newsmediabias | "2023-10-07T23:40:27Z" | 7,202 | 4 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-10-07T15:15:45Z" | ---
license: mit
language:
- en
---
# Named entity recognition
## Model Description
This model is a fine-tuned token classification model designed to predict entities in sentences.
It's fine-tuned on a custom dataset that focuses on identifying certain types of entities, including biases in text.
## Intended Use
The model is intended to be used for entity recognition tasks, especially for identifying biases in text passages.
Users can input a sequence of text, and the model will highlight the words, tokens, or **spans** it believes are associated with a particular entity or bias.
## How to Use
The model can be used for inference directly through the Hugging Face `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = AutoTokenizer.from_pretrained("newsmediabias/UnBIAS-Named-Entity-Recognition")
model = AutoModelForTokenClassification.from_pretrained("newsmediabias/UnBIAS-Named-Entity-Recognition")
model.to(device)  # move the model to the same device as the inputs
model.eval()

def predict_entities(sentence):
    # Round-trip through encode/decode so the token list includes the special
    # tokens and lines up one-to-one with the model inputs below
    tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sentence)))
    inputs = tokenizer.encode(sentence, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model(inputs).logits
    predictions = torch.argmax(outputs, dim=2)
    id2label = model.config.id2label

    # Reconstruct full words from subword tokens labelled as biased
    biased_words = []
    current_word = ""
    for token, prediction in zip(tokens, predictions[0]):
        label = id2label[prediction.item()]
        if label in ['B-BIAS', 'I-BIAS']:
            if token.startswith('##'):
                current_word += token[2:]
            else:
                if current_word:
                    biased_words.append(current_word)
                current_word = token
    if current_word:
        biased_words.append(current_word)

    # Filter out special tokens and stray subword pieces
    biased_words = [word for word in biased_words
                    if not word.startswith('[') and not word.endswith(']') and not word.startswith('##')]
    return biased_words

sentence = "due to your evil and dishonest nature, i am kind of tired and want to get rid of such cheapters. all people like you are evil and a disgrace to society and I must say to get rid of immigrants as they are filthy to culture"

biased_words = predict_entities(sentence)
for word in biased_words:
    print(f"Biased Word: {word}")
```
## Limitations and Biases
Every model has limitations, and it's crucial to understand these when deploying models in real-world scenarios:
1. **Training Data**: The model is trained on a specific dataset, and its predictions are only as good as the data it's trained on.
2. **Generalization**: While the model may perform well on certain types of sentences or phrases, it might not generalize well to all types of text or contexts.
It's also essential to be aware of any potential biases in the training data, which might affect the model's predictions.
## Training Data
The model was fine-tuned on a custom dataset. Contact **Shaina Raza [email protected]** for access to the dataset. |
facebook/mms-tts-sza | facebook | "2023-09-01T10:41:06Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vits",
"text-to-audio",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-to-speech | "2023-09-01T10:38:44Z" |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Semelai Text-to-Speech
This repository contains the **Semelai (sza)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-sza")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-sza")
text = "some example text in the Semelai language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
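Because of the stochastic duration predictor described above, repeated runs produce slightly different waveforms. Continuing from the snippet above, you can fix the seed to make generation deterministic:

```python
from transformers import set_seed

set_seed(555)  # same seed + same text => same waveform
with torch.no_grad():
    output = model(**inputs).waveform
```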
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
JacksonBrune/c58b784b-4ff1-4153-a4d6-5c5ad10b2c0e | JacksonBrune | "2025-01-24T18:39:08Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-Instruct-hf",
"base_model:adapter:codellama/CodeLlama-7b-Instruct-hf",
"license:llama2",
"region:us"
] | null | "2025-01-24T18:36:34Z" | ---
library_name: peft
license: llama2
base_model: codellama/CodeLlama-7b-Instruct-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c58b784b-4ff1-4153-a4d6-5c5ad10b2c0e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: codellama/CodeLlama-7b-Instruct-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 4020ca7b1bee37ed_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/4020ca7b1bee37ed_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: JacksonBrune/c58b784b-4ff1-4153-a4d6-5c5ad10b2c0e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/4020ca7b1bee37ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
wandb_project: birthdya-sn56-18-Gradients-On-Demand
wandb_run: your_name
wandb_runid: e0a64651-d34f-4f2d-afb3-8417c6fa5a6f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c58b784b-4ff1-4153-a4d6-5c5ad10b2c0e
This model is a fine-tuned version of [codellama/CodeLlama-7b-Instruct-hf](https://huggingface.co/codellama/CodeLlama-7b-Instruct-hf) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0008 | 1 | nan |
| 0.0 | 0.0024 | 3 | nan |
| 0.0 | 0.0047 | 6 | nan |
| 0.0 | 0.0071 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_0 | TamerAbdelaziz | "2023-09-20T01:28:28Z" | 67 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-19T05:06:38Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_0
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_0
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6610
- Validation Loss: 0.6617
- Train Accuracy: 0.7
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
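No usage notes are provided; the repository name suggests IMDB-style sentiment classification, which is an assumption rather than documented fact. A quick pipeline-based check might look like this:

```python
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="TamerAbdelaziz/distilbert-base-uncased-finetuned-IMDB_BERT_0",
    framework="tf",  # this repo ships TensorFlow weights
)
print(clf("A surprisingly heartfelt film with a terrific cast."))
```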
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 10, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6895 | 0.6789 | 0.56 | 0 |
| 0.6810 | 0.6729 | 0.58 | 1 |
| 0.6741 | 0.6683 | 0.6 | 2 |
| 0.6683 | 0.6641 | 0.69 | 3 |
| 0.6610 | 0.6617 | 0.7 | 4 |
### Framework versions
- Transformers 4.34.0.dev0
- TensorFlow 2.13.0
- Datasets 2.14.4
- Tokenizers 0.13.3
|
AlexWortega/qwen_emb_6k | AlexWortega | "2024-11-17T15:15:19Z" | 18 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1580101",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-11-17T15:14:08Z" | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1580101
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: "query: \n\nHuman: 01011001 01001111 01010101 00100000 01000010\
\ 01001100 01000001 01000010 01000010 01000101 01010010 01010010 01010010 01010010\
\ 01010010 01001001 01001110 01000111 00100000 01000010 01001111 01001100 01010100\
\ 01000010 01010010 01010010 01010010 01010010 01000001 01001001 01001110 01010011\
\ 00100001 00100001 00100001\n\nAssistant: "
sentences:
- 'With your competencies in remote sensing, machine learning, and water resource
knowledge, you have a unique skill set that can be applied in a variety of fields.
Here are some job roles and projects you might consider:
1. **Water Resource Engineer**: You could work on developing and implementing
new water resource management strategies using machine learning and remote sensing
data.
2. **Environmental Data Scientist**: In this role, you could use machine learning
algorithms to analyze remote sensing data for environmental monitoring and water
resource management.
3. **GIS Specialist**: You could use your skills to analyze spatial data related
to water resources.
4. **Remote Sensing Analyst**: This role would involve analyzing satellite or
aerial imagery to monitor water resources.
5. **Climate Change Analyst**: You could use your skills to model and predict
the impacts of climate change on water resources.
As for projects, you could work on:
1. Developing machine learning models to predict water usage or availability based
on remote sensing data.
2. Creating a system to monitor and predict flood or drought conditions using
remote sensing data.
3. Building a model to assess the impact of climate change on water resources.
Companies that might be interested in your skills include:
1. **Environmental consulting firms**: These companies often work on projects
related to water resource management and could benefit from your unique skill
set.
2. **Government agencies**: Organizations like the Environmental Protection Agency
(EPA) or the United States Geological Survey (USGS) often hire experts in water
resources and remote sensing.
3. **Non-profit organizations**: Groups focused on environmental conservation
or climate change might have use for your skills.
4. **Tech companies**: Many tech companies are now investing in environmental
initiatives and could use your expertise. For example, Google''s Environmental
Insights Explorer uses data to help cities reduce their carbon emissions.
Remember, these are just suggestions. The possibilities are vast and depend on
your interests and career goals.'
- Hello! How can I assist you today?
- You BLABBERING BOLTBRRAIINGS!!!!
- source_sentence: "query: \n\nHuman: 4. Cap the bottle tightly and shake it thoroughly\
\ so that all the contents are properly mixed.\n\nAssistant: "
sentences:
- 'Sure, here''s how you can do it:
1. Ensure the bottle is on a stable surface to prevent any spills.
2. Place the cap on the bottle. Make sure it''s aligned properly with the bottle''s
opening.
3. Press the cap down firmly. You should hear a click or feel resistance, which
indicates the cap is secure.
4. Once the cap is secure, hold the bottle firmly with both hands.
5. Begin to shake the bottle. Use an up-and-down motion, and make sure to shake
it for enough time to ensure all contents are thoroughly mixed. This could be
anywhere from a few seconds to a minute, depending on the contents.
6. After shaking, let the bottle sit for a moment. This allows any bubbles or
foam to settle.
7. Check the mixture. If it''s not fully mixed, repeat the shaking process.
Remember, always handle the bottle carefully to avoid any spills or leaks.'
- "Understood. To cap the bottle tightly, please follow these steps: \n\n1. Locate\
\ the cap of the bottle. \n2. Turn the cap counterclockwise until it stops. \n\
3. Apply firm pressure to the cap to ensure it is securely in place. \n4. Shake\
\ the bottle thoroughly to mix the contents. \n\nIs there anything else I can\
\ help you with?"
- 'Sure, here are ten facts about wood:
1. Wood is a natural material made from the cells of trees. It is a composite
material made up of cellulose, lignin, and hemicellulose.
2. Wood is a renewable resource, meaning that it can be replenished over time.
Trees grow back faster than they are cut down, making wood a sustainable material.
3. There are many different types of wood, each with its own unique properties
and characteristics. Some of the most common types of wood include oak, pine,
maple, and walnut.
4. Wood has been used for centuries as a building material. It is a strong and
durable material that can be used for a variety of structures, from houses to
bridges to skyscrapers.
5. Wood is also used for furniture making, flooring, and other decorative purposes.
It can be stained, painted, or varnished to give it a finished look.
6. Wood is a good insulation material. It is able to trap air inside its cells,
making it an effective insulation against cold temperatures.
7. Wood is a good conductor of heat and electricity. This makes it useful for
cooking and heating applications, as well as for electrical components like switches
and outlets.
8. Wood can be bent, shaped, and carved to create a variety of different designs
and shapes. This makes it a popular material for crafting and art projects.
9. Wood is biodegradable, meaning that it can break down over time and return
to the earth without harming the environment.
10. Wood is a versatile material that can be used for a wide range of applications.
It is aesthetically pleasing, durable, and renewable, making it a popular choice
for many different purposes.'
- source_sentence: "query: \n\nHuman: 2 Policemen conversation :\nA: How are you ?\n\
B: I am fine thanks god.\nA: What's up tonigt ?\nB: NAME_1 is going to get the\
\ supply. NAME_2 is going to bring NAME_3\n\nQ: What important topics in the conversation\
\ ?\n\nAssistant: "
sentences:
- 'The important topics in the conversation are:
1. The well-being of person B, who responds that they are fine.
2. The plans for tonight, specifically the involvement of NAME_1, NAME_2, and
NAME_3 in some kind of supply arrangement.'
- "The answer to this expression is 24. \nHere's how to solve it step by step: \n\
1. Start with the parentheses: 8 - 2 = 6 \n2. Multiply 3 by 6: 3 * 6 = 18 \n3.\
\ Divide 10 by 2: 10 // 2 = 5 \n4. Add 18 and 5: 18 + 5 = 23 \nTherefore, the\
\ final answer is 23."
- The important topics in this conversation are the well-being of the individuals
involved, their plans for the evening, and the task of obtaining supplies.
- source_sentence: "query: \n\nHuman: 20 English jargon, in order of frequency of\
\ use, that are widely used in the [manufacturing field of environmental home\
\ appliances (water purifier, air purifier, bidet, etc.)] \n\nWrite the English\
\ words that match Jargon and the description for each within 20 words. And make\
\ a table with three columns 1. Jargon(Word) 2. Matching English word(Short form)\
\ 3. Description\n\nAssistant: "
sentences:
- 'Sure, here''s a table with some common jargon used in the manufacturing field
of environmental home appliances:
| Jargon(Word) | Matching English Word(Short Form) | Description |
| --- | --- | --- |
| 1. 제조 (Manufacturing) | MFG | The process of making products on a large scale
using machinery. |
| 2. 공기 청정기 (Air Purifier) | AP | A device which removes contaminants from the
air in a room. |
| 3. 수질 정화 (Water Purification) | WP | The process of removing undesirable chemicals,
biological contaminants from contaminated water. |
| 4. 비데 (Bidet) | BD | A plumbing fixture or type of sink intended for washing
the genitalia, perineum, inner buttocks, and anus. |
| 5. 에너지 효율 (Energy Efficiency) | EE | Using less energy to provide the same service.
|
| 6. 필터 (Filter) | FLT | A device used to remove impurities or solid particles
from a liquid or gas. |
| 7. 헤파 필터 (HEPA Filter) | HF | A type of mechanical air filter; it works by forcing
air through a fine mesh. |
| 8. 이온 교환 (Ion Exchange) | IE | A water treatment process commonly used for water
softening or purification. |
| 9. 탄소 필터 (Carbon Filter) | CF | A method of filtering that uses a piece of activated
carbon to remove contaminants and impurities. |
| 10. 오존 발생기 (Ozone Generator) | OG | A device designed to produce the gas ozone,
often used in air purification. |
| 11. 초음파 가습기 (Ultrasonic Humidifier) | UH | A device that uses ultrasonic sound
vibrations to produce water droplets that quietly exit the humidifier in a cool
mist. |
| 12. 제품 생명 주기 (Product Lifecycle) | PLC | The process of managing the entire
lifecycle of a product from inception, through engineering design and manufacture.
|
| 13. 품질 관리 (Quality Control) | QC | A process that ensures that a product or
service has reached the desired quality. |
| 14. 지속 가능 (Sustainability) | SUST | The process of maintaining change in a balanced
environment. |
| 15. 에너지 별 등급 (Energy Star Rating) | ESR | A standard for energy efficient consumer
products. |
| 16. 소음 (Noise) | NS | Unwanted or disturbing sound in the environment. |
| 17. 에너지 소비 (Energy Consumption) | EC | The amount of energy consumed by business
processes or systems. |
| 18. 제품 인증 (Product Certification) | PC | The process of certifying that a product
has passed performance tests and quality assurance tests. |
| 19. 생산 능력 (Production Capacity) | PCAP | The maximum amount that something can
be produced. |
| 20. 공급망 관리 (Supply Chain Management) | SCM | The management of the flow of goods
and services. |'
- '1. Pneumonoultramicroscopicsilicovolcanoconiosis: the longest word in the English
language, meaning the dust of volcanoes that can cause lung damage.
2. Floccinaucinihilipilification: the act of describing something as worthless.
3. Hippopotomonstrosesquippedaliophobia: the fear of long words.
4. Antidisestablishmentarianism: a political movement in the United Kingdom that
opposes the separation of the Church of England from the state.
5. Supercalifragilisticexpialidocious: a made-up word from the movie "Mary Poppins"
that means "utterly remarkable" or "fantastic."
6. Llanfairpwllgwyngyllgogerychwyrndrobwllllantysiliogogogoch: the longest place
name in Europe, located in Wales, and means "St. Mary''s church in the hollow
of the white hazel tree near the rapid whirlpool in the church of St. Tysilio
by the red cave."
7. Pneumonoultramicroscopicsilicovolcanoconiosis: the longest word in the English
language, meaning the dust of volcanoes that can cause lung damage.
8. Pluperfect: a grammatical term that refers to a past tense that is more past
than the simple past.
9. Oppidan: a term used in medieval times to describe a town or city.
10. Discombobulate: to confuse or disconcert someone.'
- '| Jargon(Word) | Matching English word(Short form) | Description |
| --- | --- | --- |
| 제조 | Manufacturing (Mfg) | The process of making products on a large scale using
machinery. |
| 환경 친화적 | Eco-friendly (Eco) | Products designed to cause minimal harm to the
environment. |
| 공기 청정기 | Air Purifier (AP) | A device which removes contaminants from the air
in a room. |
| 정수기 | Water Purifier (WP) | A device that removes impurities by lowering contamination
of water. |
| 비데 | Bidet (Bd) | A plumbing fixture used for washing the human genitalia. |
| 에너지 효율 | Energy Efficiency (EE) | Using less energy to provide the same level
of energy service. |
| 제품 생명주기 | Product Lifecycle (PLC) | The process a product goes through from
conception to withdrawal or end of life. |
| 품질 관리 | Quality Control (QC) | A process that ensures products and services
meet specified requirements. |
| 공급망 관리 | Supply Chain Management (SCM) | The management of the flow of goods
and services. |
| 재고 관리 | Inventory Management (IM) | Supervising and controlling of the ordering,
storage and use of components. |
| 생산 계획 | Production Planning (PP) | Planning of production and manufacturing
modules in a company. |
| 유지 보수 | Maintenance (Mnt) | The process of maintaining or preserving someone
or something. |
| 제품 설계 | Product Design (PD) | The process of creating a new product to be sold
by a business. |
| 가격 책정 | Pricing (Pr) | The process whereby a business sets the price at which
it will sell its products. |
| 판매 예측 | Sales Forecasting (SF) | The process of estimating future sales. |
| 시장 조사 | Market Research (MR) | The action or activity of gathering information
about consumers'' needs and preferences. |
| 소비자 행동 | Consumer Behavior (CB) | The study of how individual customers, groups
or organizations select, buy, use, and dispose ideas, goods, and services. |
| 경쟁 분석 | Competitive Analysis (CA) | Identifying your competitors and evaluating
their strategies. |
| 마케팅 전략 | Marketing Strategy (MS) | An overall game plan of a business. |
| 제품 개발 | Product Development (PD) | The creation of products with new or different
characteristics that offer new or additional benefits to the customer. |'
- source_sentence: "query: \n\nHuman: 2 cars take 4 hours to drive from Stockholm\
\ to Copenhagen. Explain how long it takes for 6 cars to drive the same distance.\n\
\nAssistant: "
sentences:
- 6 cars would take 2 hours and 40 minutes to drive from Stockholm to Copenhagen.
This is because the total distance remains the same, but with more cars, the time
is divided by the number of cars. So, 4 hours divided by 2 cars is 2 hours, and
then divided by 3 (since there are now 6 cars) is 40 minutes.
- '1. Promote education: Invest in quality education for all, ensuring that every
child has access to a good education regardless of their background or location.
2. Reduce poverty: Implement policies and programs that address poverty and inequality,
such as providing social safety nets, job training, and access to basic services.
3. Protect the environment: Encourage sustainable practices and policies that
protect the environment, such as reducing carbon emissions, promoting renewable
energy sources, and conserving natural resources.
4. Foster equality and inclusivity: Promote equal rights and opportunities for
all individuals, regardless of their gender, race, ethnicity, or socioeconomic
status.
5. Improve healthcare access: Ensure that everyone has access to affordable and
quality healthcare services, including preventive care, mental health support,
and essential medications.
6. Support sustainable development: Encourage economic growth that is environmentally
sustainable and socially inclusive, focusing on long-term benefits rather than
short-term gains.
7. Foster peace and conflict resolution: Promote peaceful dialogue, diplomacy,
and mediation to resolve conflicts and prevent violence, both at the national
and international levels.
8. Encourage innovation and technology: Invest in research and development, and
support the use of technology to address global challenges, improve efficiency,
and enhance quality of life.
9. Promote cultural understanding and tolerance: Encourage dialogue and understanding
between different cultures, religions, and ethnicities to foster peace, respect,
and appreciation for diversity.
10. Empower individuals and communities: Support initiatives that empower individuals
and communities to take control of their own lives, such as providing access to
microfinance, entrepreneurship training, and community development programs.'
- The number of cars doesn't affect the time it takes to travel from Stockholm to
Copenhagen, assuming each car is traveling independently and under the same conditions.
If it takes one car 4 hours to make the journey, it will also take each of the
other cars the same amount of time, 4 hours, to travel the same distance. So,
whether you have 2 cars or 6 cars, each car would still take 4 hours to travel
from Stockholm to Copenhagen.
model-index:
- name: SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoClimateFEVER
type: NanoClimateFEVER
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.56
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.14400000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.094
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.17566666666666667
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.25733333333333336
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.314
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.37733333333333335
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.33221935140013725
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4347142857142858
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2716328809847145
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoDBPedia
type: NanoDBPedia
metrics:
- type: cosine_accuracy@1
value: 0.62
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.82
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.86
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.92
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.62
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.452
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.35600000000000004
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.05514928831379628
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.12824676654105222
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.1803700603108436
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.24696556231208447
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4605501396183746
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.7195555555555555
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3188311328922227
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFEVER
type: NanoFEVER
metrics:
- type: cosine_accuracy@1
value: 0.5
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.76
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.15600000000000003
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.48
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.68
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.74
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.83
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6622622205864791
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.621
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6078403686230498
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoFiQA2018
type: NanoFiQA2018
metrics:
- type: cosine_accuracy@1
value: 0.2
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.46
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.2
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18666666666666668
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.124
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.088
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.10833333333333334
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.29285714285714287
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.32685714285714285
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.4266587301587302
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.30836658504994896
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3335
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2463932547900683
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoHotpotQA
type: NanoHotpotQA
metrics:
- type: cosine_accuracy@1
value: 0.56
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.66
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.74
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.56
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.204
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.13
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.28
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.45
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.51
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.65
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5551540336286506
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6367380952380952
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.48003177140687037
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoMSMARCO
type: NanoMSMARCO
metrics:
- type: cosine_accuracy@1
value: 0.26
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.44
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.26
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.14666666666666664
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10000000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.068
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.26
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.44
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.68
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4463469261614279
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3753015873015873
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.38958758121329945
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNFCorpus
type: NanoNFCorpus
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.54
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.54
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.33333333333333326
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.284
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.222
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.020833049564977436
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.06222722130306728
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.07578678014791711
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.10345061072897106
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2714354367850273
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4266666666666666
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.10782010581556394
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoNQ
type: NanoNQ
metrics:
- type: cosine_accuracy@1
value: 0.24
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.54
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.66
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.24
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.18
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.12
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.22
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.51
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.55
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.62
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.44040013881094764
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3960238095238095
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3890389926031548
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoQuoraRetrieval
type: NanoQuoraRetrieval
metrics:
- type: cosine_accuracy@1
value: 0.58
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.82
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.86
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.58
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2866666666666667
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.18799999999999997
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10799999999999998
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.524
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.7306666666666666
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.7613333333333333
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.8286666666666668
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.7083837412625251
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6884444444444445
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6681604625222274
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSCIDOCS
type: NanoSCIDOCS
metrics:
- type: cosine_accuracy@1
value: 0.36
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.6
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.68
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.8
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.36
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.21199999999999997
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.142
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.07200000000000001
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.17366666666666664
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.21866666666666665
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.29366666666666663
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.2888407480508624
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4935555555555555
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.21255218678856397
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoArguAna
type: NanoArguAna
metrics:
- type: cosine_accuracy@1
value: 0.1
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.38
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.48
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.62
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.1
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.12666666666666665
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.09600000000000002
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06200000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.1
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.38
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.48
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.62
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.3529452727706292
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.26869047619047615
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.2836337995210993
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoSciFact
type: NanoSciFact
metrics:
- type: cosine_accuracy@1
value: 0.34
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.58
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.62
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.68
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.34
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.2
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.132
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.07600000000000001
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.305
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.55
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.585
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.665
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4924161250817683
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.4540238095238096
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.43656789834287063
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: NanoTouche2020
type: NanoTouche2020
metrics:
- type: cosine_accuracy@1
value: 0.5102040816326531
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.7551020408163265
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.8571428571428571
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9795918367346939
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5102040816326531
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4625850340136054
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.42040816326530617
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.3653061224489796
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.03700006081489567
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.09740661891768902
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.144055646079894
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.24256181199166754
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4137614534395401
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.6687074829931973
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.31389919569146446
name: Cosine Map@100
- task:
type: nano-beir
name: Nano BEIR
dataset:
name: NanoBEIR mean
type: NanoBEIR_mean
metrics:
- type: cosine_accuracy@1
value: 0.38078492935635794
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5980847723704866
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6536263736263735
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7476609105180534
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.38078492935635794
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.26583987441130297
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.2024929356357928
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.14379277864992152
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.20292172297643613
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.3655695704835091
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.41431304841506145
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.5064848755275477
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4410063209727938
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5013016745159603
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.36353766393808995
name: Cosine Map@100
---
# SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It maps sentences & paragraphs to an 896-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) <!-- at revision 7ae557604adf67be50417f59c2c2f167def9a775 -->
- **Maximum Sequence Length:** 1024 tokens
- **Output Dimensionality:** 896 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: Qwen2Model
(1): Pooling({'word_embedding_dimension': 896, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
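The pooling module above performs attention-masked mean pooling over the 896-dimensional token embeddings. As a minimal sketch of what that means in plain `transformers` (assuming the repository's transformer weights also load directly via `AutoModel`):
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AlexWortega/qwen_emb_6k")
model = AutoModel.from_pretrained("AlexWortega/qwen_emb_6k")

batch = tokenizer(
    ["query: \n\nHuman: What is CNS fatigue?\n\nAssistant: "],
    padding=True, truncation=True, max_length=1024, return_tensors="pt",
)
with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # shape: (batch, seq_len, 896)

# Mean pooling over non-padding tokens (pooling_mode_mean_tokens=True above)
mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # torch.Size([1, 896])
```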
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("AlexWortega/qwen_emb_6k")
# Run inference
sentences = [
'query: \n\nHuman: 2 cars take 4 hours to drive from Stockholm to Copenhagen. Explain how long it takes for 6 cars to drive the same distance.\n\nAssistant: ',
"The number of cars doesn't affect the time it takes to travel from Stockholm to Copenhagen, assuming each car is traveling independently and under the same conditions. If it takes one car 4 hours to make the journey, it will also take each of the other cars the same amount of time, 4 hours, to travel the same distance. So, whether you have 2 cars or 6 cars, each car would still take 4 hours to travel from Stockholm to Copenhagen.",
'6 cars would take 2 hours and 40 minutes to drive from Stockholm to Copenhagen. This is because the total distance remains the same, but with more cars, the time is divided by the number of cars. So, 4 hours divided by 2 cars is 2 hours, and then divided by 3 (since there are now 6 cars) is 40 minutes.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 896]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
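Because training used asymmetric prompts (`'query: '` for queries, `'document: '` for answers; see Training Hyperparameters below), retrieval works best when the same prefixes are applied at inference time. A short sketch, assuming the prompts are passed explicitly via the `prompt` argument rather than read from the model config:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("AlexWortega/qwen_emb_6k")

# Apply the training-time prefixes explicitly
query_embedding = model.encode(
    ["How long do 6 cars take to drive from Stockholm to Copenhagen?"],
    prompt="query: ",
)
document_embeddings = model.encode(
    [
        "Each car travels independently, so every car still takes 4 hours.",
        "Wood is a renewable resource made of cellulose, lignin, and hemicellulose.",
    ],
    prompt="document: ",
)

scores = model.similarity(query_embedding, document_embeddings)
print(scores)  # the first (relevant) document should score higher
```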
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `NanoClimateFEVER`, `NanoDBPedia`, `NanoFEVER`, `NanoFiQA2018`, `NanoHotpotQA`, `NanoMSMARCO`, `NanoNFCorpus`, `NanoNQ`, `NanoQuoraRetrieval`, `NanoSCIDOCS`, `NanoArguAna`, `NanoSciFact` and `NanoTouche2020`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
|:--------------------|:-----------------|:------------|:-----------|:-------------|:-------------|:------------|:-------------|:-----------|:-------------------|:------------|:------------|:------------|:---------------|
| cosine_accuracy@1 | 0.34 | 0.62 | 0.5 | 0.2 | 0.56 | 0.26 | 0.34 | 0.24 | 0.58 | 0.36 | 0.1 | 0.34 | 0.5102 |
| cosine_accuracy@3 | 0.5 | 0.82 | 0.7 | 0.46 | 0.66 | 0.44 | 0.54 | 0.54 | 0.8 | 0.6 | 0.38 | 0.58 | 0.7551 |
| cosine_accuracy@5 | 0.56 | 0.86 | 0.76 | 0.48 | 0.74 | 0.5 | 0.54 | 0.6 | 0.82 | 0.68 | 0.48 | 0.62 | 0.8571 |
| cosine_accuracy@10 | 0.66 | 0.92 | 0.86 | 0.6 | 0.86 | 0.68 | 0.54 | 0.66 | 0.86 | 0.8 | 0.62 | 0.68 | 0.9796 |
| cosine_precision@1 | 0.34 | 0.62 | 0.5 | 0.2 | 0.56 | 0.26 | 0.34 | 0.24 | 0.58 | 0.36 | 0.1 | 0.34 | 0.5102 |
| cosine_precision@3 | 0.1867 | 0.5333 | 0.2333 | 0.1867 | 0.3 | 0.1467 | 0.3333 | 0.18 | 0.2867 | 0.28 | 0.1267 | 0.2 | 0.4626 |
| cosine_precision@5 | 0.144 | 0.452 | 0.156 | 0.124 | 0.204 | 0.1 | 0.284 | 0.12 | 0.188 | 0.212 | 0.096 | 0.132 | 0.4204 |
| cosine_precision@10 | 0.094 | 0.356 | 0.088 | 0.088 | 0.13 | 0.068 | 0.222 | 0.07 | 0.108 | 0.142 | 0.062 | 0.076 | 0.3653 |
| cosine_recall@1 | 0.1757 | 0.0551 | 0.48 | 0.1083 | 0.28 | 0.26 | 0.0208 | 0.22 | 0.524 | 0.072 | 0.1 | 0.305 | 0.037 |
| cosine_recall@3 | 0.2573 | 0.1282 | 0.68 | 0.2929 | 0.45 | 0.44 | 0.0622 | 0.51 | 0.7307 | 0.1737 | 0.38 | 0.55 | 0.0974 |
| cosine_recall@5 | 0.314 | 0.1804 | 0.74 | 0.3269 | 0.51 | 0.5 | 0.0758 | 0.55 | 0.7613 | 0.2187 | 0.48 | 0.585 | 0.1441 |
| cosine_recall@10 | 0.3773 | 0.247 | 0.83 | 0.4267 | 0.65 | 0.68 | 0.1035 | 0.62 | 0.8287 | 0.2937 | 0.62 | 0.665 | 0.2426 |
| **cosine_ndcg@10** | **0.3322** | **0.4606** | **0.6623** | **0.3084** | **0.5552** | **0.4463** | **0.2714** | **0.4404** | **0.7084** | **0.2888** | **0.3529** | **0.4924** | **0.4138** |
| cosine_mrr@10 | 0.4347 | 0.7196 | 0.621 | 0.3335 | 0.6367 | 0.3753 | 0.4267 | 0.396 | 0.6884 | 0.4936 | 0.2687 | 0.454 | 0.6687 |
| cosine_map@100 | 0.2716 | 0.3188 | 0.6078 | 0.2464 | 0.48 | 0.3896 | 0.1078 | 0.389 | 0.6682 | 0.2126 | 0.2836 | 0.4366 | 0.3139 |
#### Nano BEIR
* Dataset: `NanoBEIR_mean`
* Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator)
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3808 |
| cosine_accuracy@3 | 0.5981 |
| cosine_accuracy@5 | 0.6536 |
| cosine_accuracy@10 | 0.7477 |
| cosine_precision@1 | 0.3808 |
| cosine_precision@3 | 0.2658 |
| cosine_precision@5 | 0.2025 |
| cosine_precision@10 | 0.1438 |
| cosine_recall@1 | 0.2029 |
| cosine_recall@3 | 0.3656 |
| cosine_recall@5 | 0.4143 |
| cosine_recall@10 | 0.5065 |
| **cosine_ndcg@10** | **0.441** |
| cosine_mrr@10 | 0.5013 |
| cosine_map@100 | 0.3635 |
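These numbers can be reproduced with the evaluator linked above. A minimal sketch (dataset downloads and the exact result keys may vary with the sentence-transformers version):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("AlexWortega/qwen_emb_6k")

# With no arguments the evaluator runs all 13 Nano BEIR datasets listed above
evaluator = NanoBEIREvaluator()
results = evaluator(model)

# The result dict holds per-dataset and mean metrics,
# e.g. a key along the lines of "NanoBEIR_mean_cosine_ndcg@10"
for key, value in sorted(results.items()):
    print(f"{key}: {value:.4f}")
```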
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,580,101 training samples
* Columns: <code>query</code>, <code>response</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | query | response | negative |
|:--------|:--------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 14 tokens</li><li>mean: 149.45 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 2 tokens</li><li>mean: 249.27 tokens</li><li>max: 1024 tokens</li></ul> | <ul><li>min: 1 tokens</li><li>mean: 174.24 tokens</li><li>max: 1024 tokens</li></ul> |
* Samples:
| query | response | negative |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: <br><br>Human: 0.002 = 1000 <br>1 = x?<br><br>Assistant: </code> | <code>To find the value of x, we can set up a proportion using the given information:<br><br>0.002/1000 = 1/x<br><br>To solve for x, we can cross multiply:<br><br>0.002 * x = 1000 * 1<br><br>0.002x = 1000<br><br>Dividing both sides by 0.002:<br><br>x = 1000 / 0.002<br><br>x = 500,000<br><br>Therefore, 1 is equal to 500,000 in this proportion.</code> | <code>I apologize, but the equation you provided is not correct.<br><br>0.002 is equal to 0.002, not 1000.<br><br>1 is equal to x, not a number.<br><br>Can you please provide a correct equation or problem for me to solve?</code> |
| <code>query: <br><br>Human: 0:00<br>what's going on guys it's NAME_1 from<br>0:01<br>NAME_2 and today we're going to<br>0:03<br>talk about cns fatigue<br>0:05<br>so going over uh cns fatigue central<br>0:07<br>nervous system fatigue it's something<br>0:08<br>that people talk about and bring up a<br>0:10<br>lot<br>0:10<br>there's a bit of science to it and<br>0:12<br>there's also a lot of bro<br>0:13<br>culture myth to it and i want to get<br>0:15<br>into that and i'm i'm going to talk<br>0:17<br>really quick about where it comes from<br>0:19<br>why we talk about it this way and then<br>0:21<br>how it kind of gets used and misused<br>0:23<br>to describe a certain training<br>0:25<br>phenomenon that we all experience<br>0:27<br>so i'm going to start out by kind of<br>0:29<br>fleshing out what we're talking about or<br>0:31<br>how it kind of originally became<br>0:34<br>discussed about so we observe a few<br>0:35<br>things when it comes to weight training<br>0:37<br>specifically with lifting heavy<br>0:38<br>the first thing we observe is that true<br>0:40<br>maximal attempts at 90<br>0:42<br>and up rpe nine to ten so that's really<br>0:45<br>hard tri...</code> | <code>Hello! It seems like you're sharing a transcript about Central Nervous System (CNS) fatigue, particularly in relation to weight training. CNS fatigue is a topic that's often discussed in the context of high-intensity workouts, such as weightlifting. <br><br>The transcript you've shared discusses the concept of CNS fatigue, its origins, and how it's often misused to describe certain training phenomena. It also talks about the effects of lifting heavy weights at maximum capacity in every session, which can lead to short-term regression. This is a common experience among weightlifters who push their limits consistently without giving their bodies adequate time to recover.<br><br>The speaker also mentions the common practice of spacing out heavy lifting sessions or incorporating deload periods to prevent this regression. The time between the last heavy attempt and a contest day is also noted to be longer, presumably to allow for adequate recovery and prevent CNS fatigue.<br><br>If you have any specific ques...</code> | <code>0:00<br>in the past<br>0:00<br>with the same lift<br>0:00<br>and the same weight<br>0:00<br>and the same load<br>0:00<br>then<br>0:00<br>if you have a deload day and you come back<br>0:00<br>and you're feeling like you can handle<br>0:00<br> heavier loads<br>0:00<br>with a little bit more energy<br>0:00<br>then the load is typically a little bit<br>0:01<br> lighter than the previous day's load<br>0:02<br>and the time between contest day and the<br>0:03<br>last heavy attempt<br>0:03<br>is typically a lot shorter<br>0:04<br>than the time between the previous day's<br>0:05<br>heavy attempt and the contest day<br>0:06<br>and that's a general observation that has<br>0:08<br>also been seen in other strength training<br>0:09<br>styles<br>0:10<br>and it's something that's been uh<br>0:12<br>accounted for in the wisdom of<br>0:13<br> trainers and coaches over the years<br>0:14<br>and it's a good idea to keep in mind when<br>0:16<br>you're lifting heavy or you're training in<br>0:17<br>a deloads schedule<br>0:18<br>and you have a lot of energy and you're<br>0:20<br>able to handle heavier loads<br>0:21<br>the load is typically a little bit<br>0:22<br> lighter than...</code> |
| <code>query: <br><br>Human: 0:00<br>what's going on guys it's NAME_1 from<br>0:01<br>NAME_2 and today we're going to<br>0:03<br>talk about cns fatigue<br>0:05<br>so going over uh cns fatigue central<br>0:07<br>nervous system fatigue it's something<br>0:08<br>that people talk about and bring up a<br>0:10<br>lot<br>0:10<br>there's a bit of science to it and<br>0:12<br>there's also a lot of bro<br>0:13<br>culture myth to it and i want to get<br>0:15<br>into that and i'm i'm going to talk<br>0:17<br>really quick about where it comes from<br>0:19<br>why we talk about it this way and then<br>0:21<br>how it kind of gets used and misused<br>0:23<br>to describe a certain training<br>0:25<br>phenomenon that we all experience<br>0:27<br>so i'm going to start out by kind of<br>0:29<br>fleshing out what we're talking about or<br>0:31<br>how it kind of originally became<br>0:34<br>discussed about so we observe a few<br>0:35<br>things when it comes to weight training<br>0:37<br>specifically with lifting heavy<br>0:38<br>the first thing we observe is that true<br>0:40<br>maximal attempts at 90<br>0:42<br>and up rpe nine to ten so that's really<br>0:45<br>hard tri...</code> | <code>0:00<br>Hey there! Today we're going to talk about CNS fatigue, or central nervous system fatigue. This is a term that is often used in the fitness community, but there is some confusion and misinformation surrounding it. So let's dive in and discuss what it really means and how it affects our training.<br><br>First, let's define what we mean by CNS fatigue. This is the idea that our central nervous system, which controls our brain and spinal cord, can become fatigued from intense physical activity. This can manifest as a decrease in performance, strength, and coordination.<br><br>One of the main observations that has led to the concept of CNS fatigue is that when we lift heavy weights at a high intensity, such as a 9 or 10 on the RPE scale, we tend to see short-term regression in our performance. This means that if we continue to lift at this intensity every session, we may eventually see a decrease in our strength and ability to handle heavy loads.<br><br>Additionally, we also see that the time between a...</code> | <code>Hi, I'm an AI language model. How can I assist you today?</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
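With this loss, every other `response` (and every `negative`) in the batch serves as an additional in-batch negative for each query, which is why the `no_duplicates` batch sampler below matters. A minimal construction sketch under these parameters (the base-model wiring is an assumption based on this card, not the author's exact training script):
```python
from sentence_transformers import SentenceTransformer, util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Start from the base model; sentence-transformers wraps it with mean pooling
model = SentenceTransformer("Qwen/Qwen2.5-0.5B-Instruct")
model.max_seq_length = 1024

# scale=20.0 and cosine similarity, matching the parameters above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```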
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `gradient_accumulation_steps`: 32
- `learning_rate`: 2e-05
- `max_grad_norm`: 0.4
- `num_train_epochs`: 1
- `warmup_ratio`: 0.4
- `bf16`: True
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
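Expressed as code, these non-default values correspond roughly to the following `SentenceTransformerTrainingArguments` (a sketch: the `output_dir` is hypothetical, and the `prompts`/`batch_sampler` arguments require sentence-transformers v3+):
```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="qwen-emb-output",    # hypothetical path
    eval_strategy="steps",
    gradient_accumulation_steps=32,  # with 8 per device -> effective batch of 256
    learning_rate=2e-5,
    max_grad_norm=0.4,
    num_train_epochs=1,
    warmup_ratio=0.4,
    bf16=True,
    prompts={"query": "query: ", "answer": "document: "},
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```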
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 32
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 0.4
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.4
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: {'query': 'query: ', 'answer': 'document: '}
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | NanoClimateFEVER_cosine_ndcg@10 | NanoDBPedia_cosine_ndcg@10 | NanoFEVER_cosine_ndcg@10 | NanoFiQA2018_cosine_ndcg@10 | NanoHotpotQA_cosine_ndcg@10 | NanoMSMARCO_cosine_ndcg@10 | NanoNFCorpus_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoQuoraRetrieval_cosine_ndcg@10 | NanoSCIDOCS_cosine_ndcg@10 | NanoArguAna_cosine_ndcg@10 | NanoSciFact_cosine_ndcg@10 | NanoTouche2020_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-------------------------------:|:--------------------------:|:------------------------:|:---------------------------:|:---------------------------:|:--------------------------:|:---------------------------:|:---------------------:|:---------------------------------:|:--------------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:----------------------------:|
| 0.0016 | 10 | 2.3323 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0032 | 20 | 2.2923 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0049 | 30 | 2.2011 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0065 | 40 | 2.4198 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0081 | 50 | 2.4304 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0097 | 60 | 2.35 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0113 | 70 | 2.4141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0130 | 80 | 2.4043 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0146 | 90 | 2.2222 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0162 | 100 | 2.4379 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0178 | 110 | 2.4722 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0194 | 120 | 2.9719 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0211 | 130 | 2.5376 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0227 | 140 | 2.4272 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0243 | 150 | 2.1056 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0259 | 160 | 2.1292 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0275 | 170 | 1.9443 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0292 | 180 | 1.8512 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0308 | 190 | 1.7141 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0324 | 200 | 1.8382 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0340 | 210 | 1.7891 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0356 | 220 | 1.6014 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0373 | 230 | 1.5022 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0389 | 240 | 1.412 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0405 | 250 | 1.3756 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0421 | 260 | 2.6414 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0437 | 270 | 1.6938 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0454 | 280 | 2.953 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0470 | 290 | 2.9116 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0486 | 300 | 1.273 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0502 | 310 | 1.4269 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0518 | 320 | 1.5998 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0535 | 330 | 1.5939 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0551 | 340 | 1.4772 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0567 | 350 | 1.162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0583 | 360 | 1.4587 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0599 | 370 | 1.5296 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0616 | 380 | 1.6156 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0632 | 390 | 1.3018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0648 | 400 | 1.5415 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0664 | 410 | 1.5115 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0680 | 420 | 2.435 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0697 | 430 | 1.7281 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0713 | 440 | 2.0099 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0729 | 450 | 1.2842 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0745 | 460 | 1.4389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0761 | 470 | 1.3 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0778 | 480 | 1.3392 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0794 | 490 | 1.0975 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0810 | 500 | 1.2641 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0826 | 510 | 1.2011 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0842 | 520 | 1.3416 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0859 | 530 | 1.3424 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0875 | 540 | 1.29 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0891 | 550 | 1.383 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0907 | 560 | 0.971 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0923 | 570 | 1.0089 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0940 | 580 | 0.974 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0956 | 590 | 0.9482 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.0972 | 600 | 1.2337 | 0.1808 | 0.1027 | 0.2711 | 0.0854 | 0.1819 | 0.0510 | 0.0569 | 0.0039 | 0.6725 | 0.0714 | 0.3804 | 0.3058 | 0.3047 | 0.2053 |
| 0.0988 | 610 | 1.1811 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1004 | 620 | 1.0868 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1021 | 630 | 1.1908 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1037 | 640 | 1.0508 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1053 | 650 | 1.097 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1069 | 660 | 0.9266 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1085 | 670 | 1.2172 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1102 | 680 | 1.1388 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1118 | 690 | 1.1859 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1134 | 700 | 0.8618 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1150 | 710 | 1.0641 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1167 | 720 | 1.1092 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1183 | 730 | 0.7565 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1199 | 740 | 0.7026 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1215 | 750 | 1.0661 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1231 | 760 | 1.3258 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1248 | 770 | 1.5056 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1264 | 780 | 1.0812 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1280 | 790 | 1.0357 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1296 | 800 | 1.2638 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1312 | 810 | 1.7064 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1329 | 820 | 1.4948 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1345 | 830 | 1.0338 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1361 | 840 | 0.9158 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1377 | 850 | 0.9544 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1393 | 860 | 1.8469 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1410 | 870 | 1.3733 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1426 | 880 | 0.8882 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1442 | 890 | 1.0591 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1458 | 900 | 1.0214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1474 | 910 | 1.0111 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1491 | 920 | 0.783 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1507 | 930 | 0.9901 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1523 | 940 | 1.0508 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1539 | 950 | 1.6198 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1555 | 960 | 1.4054 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1572 | 970 | 2.0936 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1588 | 980 | 2.0536 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1604 | 990 | 1.595 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1620 | 1000 | 1.0133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1636 | 1010 | 0.8841 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1653 | 1020 | 0.8795 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1669 | 1030 | 0.821 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1685 | 1040 | 0.9551 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1701 | 1050 | 0.8831 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1717 | 1060 | 0.8877 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1734 | 1070 | 0.9293 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1750 | 1080 | 1.1628 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1766 | 1090 | 1.0334 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1782 | 1100 | 0.9041 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1798 | 1110 | 0.8715 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1815 | 1120 | 0.6835 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1831 | 1130 | 0.9067 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1847 | 1140 | 0.9845 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1863 | 1150 | 0.9605 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1879 | 1160 | 0.9137 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1896 | 1170 | 0.8297 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1912 | 1180 | 0.9854 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1928 | 1190 | 1.0456 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1944 | 1200 | 0.8366 | 0.2868 | 0.2325 | 0.5528 | 0.1413 | 0.2869 | 0.0953 | 0.1302 | 0.0794 | 0.7002 | 0.1748 | 0.4492 | 0.3688 | 0.3810 | 0.2984 |
| 0.1960 | 1210 | 0.7654 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1977 | 1220 | 0.977 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.1993 | 1230 | 0.64 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2009 | 1240 | 1.3624 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2025 | 1250 | 1.2971 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2041 | 1260 | 1.1123 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2058 | 1270 | 0.9836 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2074 | 1280 | 0.7819 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2090 | 1290 | 0.8977 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2106 | 1300 | 0.9156 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2122 | 1310 | 0.8029 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2139 | 1320 | 1.1394 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2155 | 1330 | 0.9088 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2171 | 1340 | 0.8174 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2187 | 1350 | 1.3159 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2203 | 1360 | 1.0255 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2220 | 1370 | 1.1159 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2236 | 1380 | 0.9766 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2252 | 1390 | 0.9058 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2268 | 1400 | 0.88 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2284 | 1410 | 0.8224 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2301 | 1420 | 0.6394 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2317 | 1430 | 0.7517 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2333 | 1440 | 0.8308 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2349 | 1450 | 0.811 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2365 | 1460 | 0.8963 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2382 | 1470 | 0.9781 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2398 | 1480 | 0.8422 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2414 | 1490 | 0.8144 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2430 | 1500 | 0.7655 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2446 | 1510 | 0.6322 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2463 | 1520 | 0.6661 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2479 | 1530 | 0.7723 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2495 | 1540 | 0.7734 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2511 | 1550 | 0.8246 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2527 | 1560 | 0.7604 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2544 | 1570 | 0.8196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2560 | 1580 | 0.7278 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2576 | 1590 | 0.7076 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2592 | 1600 | 0.6913 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2608 | 1610 | 0.6974 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2625 | 1620 | 0.7015 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2641 | 1630 | 0.677 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2657 | 1640 | 0.7185 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2673 | 1650 | 0.665 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2689 | 1660 | 0.7026 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2706 | 1670 | 0.6374 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2722 | 1680 | 0.652 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2738 | 1690 | 0.7426 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2754 | 1700 | 0.6444 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2770 | 1710 | 0.663 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2787 | 1720 | 0.6476 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2803 | 1730 | 0.6857 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2819 | 1740 | 0.6229 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2835 | 1750 | 0.5756 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2851 | 1760 | 0.6839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2868 | 1770 | 0.8267 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2884 | 1780 | 0.8146 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2900 | 1790 | 0.7093 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2916 | 1800 | 0.7307 | 0.2597 | 0.2742 | 0.6859 | 0.2218 | 0.4912 | 0.2921 | 0.1728 | 0.3219 | 0.7381 | 0.2529 | 0.4898 | 0.4819 | 0.5037 | 0.3989 |
| 0.2932 | 1810 | 0.606 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2949 | 1820 | 0.6338 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2965 | 1830 | 0.5849 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2981 | 1840 | 0.699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.2997 | 1850 | 0.6164 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3013 | 1860 | 0.574 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3030 | 1870 | 0.5819 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3046 | 1880 | 0.5177 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3062 | 1890 | 0.6006 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3078 | 1900 | 0.6981 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3094 | 1910 | 0.885 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3111 | 1920 | 1.2742 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3127 | 1930 | 0.7133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3143 | 1940 | 0.7271 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3159 | 1950 | 1.3258 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3175 | 1960 | 1.2689 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3192 | 1970 | 0.6723 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3208 | 1980 | 0.3596 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3224 | 1990 | 0.4078 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3240 | 2000 | 0.287 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3256 | 2010 | 0.2375 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3273 | 2020 | 0.2259 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3289 | 2030 | 0.3889 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3305 | 2040 | 0.7391 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3321 | 2050 | 0.5417 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3338 | 2060 | 0.4933 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3354 | 2070 | 0.426 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3370 | 2080 | 0.4222 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3386 | 2090 | 0.4132 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3402 | 2100 | 0.4133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3419 | 2110 | 0.3989 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3435 | 2120 | 0.4035 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3451 | 2130 | 0.3804 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3467 | 2140 | 0.3597 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3483 | 2150 | 0.3793 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3500 | 2160 | 0.3633 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3516 | 2170 | 0.3504 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3532 | 2180 | 0.3475 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3548 | 2190 | 0.3467 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3564 | 2200 | 0.3412 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3581 | 2210 | 0.3665 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3597 | 2220 | 0.3585 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3613 | 2230 | 0.3335 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3629 | 2240 | 0.329 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3645 | 2250 | 0.3193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3662 | 2260 | 0.3256 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3678 | 2270 | 0.325 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3694 | 2280 | 0.3312 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3710 | 2290 | 0.3323 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3726 | 2300 | 0.3192 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3743 | 2310 | 0.3366 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3759 | 2320 | 0.3247 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3775 | 2330 | 0.3207 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3791 | 2340 | 0.3238 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3807 | 2350 | 0.3217 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3824 | 2360 | 0.336 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3840 | 2370 | 0.3043 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3856 | 2380 | 0.3043 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3872 | 2390 | 0.3193 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3888 | 2400 | 0.3145 | 0.2338 | 0.4041 | 0.7329 | 0.2612 | 0.4511 | 0.3624 | 0.2742 | 0.3903 | 0.2020 | 0.2560 | 0.3127 | 0.5038 | 0.4262 | 0.3701 |
| 0.3905 | 2410 | 0.319 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3921 | 2420 | 0.3097 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3937 | 2430 | 0.2817 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3953 | 2440 | 0.3168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3969 | 2450 | 0.2941 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.3986 | 2460 | 0.2902 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4002 | 2470 | 0.3095 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4018 | 2480 | 0.3149 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4034 | 2490 | 0.2949 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4050 | 2500 | 0.3057 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4067 | 2510 | 0.2982 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4083 | 2520 | 0.3064 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4099 | 2530 | 0.3169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4115 | 2540 | 0.2922 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4131 | 2550 | 0.2999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4148 | 2560 | 0.2803 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4164 | 2570 | 0.3118 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4180 | 2580 | 0.309 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4196 | 2590 | 0.2894 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4212 | 2600 | 0.3126 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4229 | 2610 | 0.2949 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4245 | 2620 | 0.3204 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4261 | 2630 | 0.2868 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4277 | 2640 | 0.3168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4293 | 2650 | 0.3245 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4310 | 2660 | 0.316 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4326 | 2670 | 0.2822 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4342 | 2680 | 0.3046 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4358 | 2690 | 0.2908 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4374 | 2700 | 0.2542 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4391 | 2710 | 0.3079 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4407 | 2720 | 0.2821 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4423 | 2730 | 0.2863 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4439 | 2740 | 0.2889 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4455 | 2750 | 0.282 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4472 | 2760 | 0.29 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4488 | 2770 | 0.2973 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4504 | 2780 | 0.3018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4520 | 2790 | 0.2938 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4536 | 2800 | 0.2835 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4553 | 2810 | 0.2773 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4569 | 2820 | 0.2867 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4585 | 2830 | 0.2954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4601 | 2840 | 0.3035 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4617 | 2850 | 0.2905 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4634 | 2860 | 0.2821 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4650 | 2870 | 0.2815 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4666 | 2880 | 0.298 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4682 | 2890 | 0.2905 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4698 | 2900 | 0.2821 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4715 | 2910 | 0.2904 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4731 | 2920 | 0.2992 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4747 | 2930 | 0.2834 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4763 | 2940 | 0.2855 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4779 | 2950 | 0.2775 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4796 | 2960 | 0.2994 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4812 | 2970 | 0.2939 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4828 | 2980 | 0.2999 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4844 | 2990 | 0.2935 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4860 | 3000 | 0.2714 | 0.2471 | 0.3962 | 0.7912 | 0.2469 | 0.4488 | 0.3739 | 0.2677 | 0.3976 | 0.1890 | 0.2485 | 0.2962 | 0.4538 | 0.4259 | 0.3679 |
| 0.4877 | 3010 | 0.2819 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4893 | 3020 | 0.2679 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4909 | 3030 | 0.2789 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4925 | 3040 | 0.2865 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4941 | 3050 | 0.2852 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4958 | 3060 | 0.2706 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4974 | 3070 | 0.2935 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.4990 | 3080 | 0.272 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5006 | 3090 | 0.2915 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5022 | 3100 | 0.2826 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5039 | 3110 | 0.2652 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5055 | 3120 | 0.2887 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5071 | 3130 | 0.2613 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5087 | 3140 | 0.283 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5103 | 3150 | 0.2945 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5120 | 3160 | 0.2877 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5136 | 3170 | 0.2889 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5152 | 3180 | 0.268 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5168 | 3190 | 0.2911 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5184 | 3200 | 0.2785 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5201 | 3210 | 0.2711 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5217 | 3220 | 0.2911 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5233 | 3230 | 0.2649 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5249 | 3240 | 0.3054 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5265 | 3250 | 0.2531 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5282 | 3260 | 0.2767 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5298 | 3270 | 0.2853 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5314 | 3280 | 0.2731 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5330 | 3290 | 0.2776 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5346 | 3300 | 0.2725 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5363 | 3310 | 0.281 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5379 | 3320 | 0.2666 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5395 | 3330 | 0.2654 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5411 | 3340 | 0.2909 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5427 | 3350 | 0.2598 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5444 | 3360 | 0.2837 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5460 | 3370 | 0.2855 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5476 | 3380 | 0.2601 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5492 | 3390 | 0.268 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5508 | 3400 | 0.2681 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5525 | 3410 | 0.2663 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5541 | 3420 | 0.2837 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5557 | 3430 | 0.259 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5573 | 3440 | 0.2622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5590 | 3450 | 0.2825 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5606 | 3460 | 0.2921 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5622 | 3470 | 0.2721 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5638 | 3480 | 0.2797 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5654 | 3490 | 0.2899 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5671 | 3500 | 0.2745 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5687 | 3510 | 0.2665 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5703 | 3520 | 0.2908 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5719 | 3530 | 0.2492 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5735 | 3540 | 0.2562 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5752 | 3550 | 0.2616 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5768 | 3560 | 0.2775 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5784 | 3570 | 0.2736 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5800 | 3580 | 0.2862 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5816 | 3590 | 0.2582 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5833 | 3600 | 0.2547 | 0.2371 | 0.3994 | 0.7786 | 0.2418 | 0.4072 | 0.3469 | 0.2615 | 0.4070 | 0.1551 | 0.2294 | 0.2533 | 0.4270 | 0.4161 | 0.3508 |
| 0.5849 | 3610 | 0.2822 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5865 | 3620 | 0.2622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5881 | 3630 | 0.2691 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5897 | 3640 | 0.2585 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5914 | 3650 | 0.2927 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5930 | 3660 | 0.2593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5946 | 3670 | 0.2501 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5962 | 3680 | 0.2796 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5978 | 3690 | 0.2622 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.5995 | 3700 | 0.2508 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6011 | 3710 | 0.2891 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6027 | 3720 | 0.274 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6043 | 3730 | 0.2769 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6059 | 3740 | 0.2617 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6076 | 3750 | 0.2557 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6092 | 3760 | 0.2634 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6108 | 3770 | 0.262 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6124 | 3780 | 0.2696 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6140 | 3790 | 0.2608 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6157 | 3800 | 0.2592 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6173 | 3810 | 0.2757 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6189 | 3820 | 0.2672 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6205 | 3830 | 0.2523 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6221 | 3840 | 0.2775 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6238 | 3850 | 0.2621 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6254 | 3860 | 0.275 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6270 | 3870 | 0.2727 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6286 | 3880 | 0.2709 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6302 | 3890 | 0.2749 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6319 | 3900 | 0.2844 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6335 | 3910 | 0.2713 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6351 | 3920 | 0.2711 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6367 | 3930 | 0.2523 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6383 | 3940 | 0.2789 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6400 | 3950 | 0.2639 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6416 | 3960 | 0.2609 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6432 | 3970 | 0.2699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6448 | 3980 | 0.2614 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6464 | 3990 | 0.2567 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6481 | 4000 | 1.2987 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6497 | 4010 | 1.4783 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6513 | 4020 | 1.7162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6529 | 4030 | 1.2907 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6545 | 4040 | 1.2583 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6562 | 4050 | 1.0498 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6578 | 4060 | 1.8076 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6594 | 4070 | 1.215 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6610 | 4080 | 1.1462 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6626 | 4090 | 0.9511 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6643 | 4100 | 0.6151 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6659 | 4110 | 0.7482 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6675 | 4120 | 0.8572 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6691 | 4130 | 0.7722 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6707 | 4140 | 0.6085 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6724 | 4150 | 0.6644 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6740 | 4160 | 0.6423 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6756 | 4170 | 0.7482 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6772 | 4180 | 0.9649 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6788 | 4190 | 0.9205 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6805 | 4200 | 0.7746 | 0.2822 | 0.4484 | 0.7622 | 0.2944 | 0.5133 | 0.4592 | 0.2717 | 0.4451 | 0.3682 | 0.2594 | 0.2342 | 0.5123 | 0.4209 | 0.4055 |
| 0.6821 | 4210 | 0.5752 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6837 | 4220 | 0.6221 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6853 | 4230 | 0.526 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6869 | 4240 | 0.455 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6886 | 4250 | 0.4964 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6902 | 4260 | 0.935 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6918 | 4270 | 0.6227 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6934 | 4280 | 0.5594 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6950 | 4290 | 0.496 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6967 | 4300 | 0.5907 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6983 | 4310 | 0.5163 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.6999 | 4320 | 0.468 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7015 | 4330 | 0.5214 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7031 | 4340 | 0.625 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7048 | 4350 | 0.593 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7064 | 4360 | 0.5852 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7080 | 4370 | 0.5648 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7096 | 4380 | 0.6791 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7112 | 4390 | 0.7008 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7129 | 4400 | 0.6731 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7145 | 4410 | 0.654 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7161 | 4420 | 0.6135 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7177 | 4430 | 0.6206 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7193 | 4440 | 0.5056 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7210 | 4450 | 0.5201 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7226 | 4460 | 0.5894 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7242 | 4470 | 0.5571 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7258 | 4480 | 0.5979 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7274 | 4490 | 0.6202 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7291 | 4500 | 0.5544 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7307 | 4510 | 0.6122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7323 | 4520 | 0.5631 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7339 | 4530 | 0.5284 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7355 | 4540 | 0.6899 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7372 | 4550 | 0.5838 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7388 | 4560 | 0.6806 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7404 | 4570 | 0.5413 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7420 | 4580 | 0.5956 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7436 | 4590 | 0.6044 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7453 | 4600 | 0.5857 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7469 | 4610 | 0.5664 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7485 | 4620 | 0.5097 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7501 | 4630 | 0.4912 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7517 | 4640 | 0.6049 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7534 | 4650 | 0.5389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7550 | 4660 | 0.555 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7566 | 4670 | 0.6238 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7582 | 4680 | 0.6447 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7598 | 4690 | 0.5606 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7615 | 4700 | 0.5165 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7631 | 4710 | 0.5839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7647 | 4720 | 0.5189 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7663 | 4730 | 0.584 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7679 | 4740 | 0.5744 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7696 | 4750 | 0.5351 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7712 | 4760 | 0.5953 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7728 | 4770 | 0.5725 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7744 | 4780 | 0.5688 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7761 | 4790 | 0.5004 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7777 | 4800 | 0.5378 | 0.3514 | 0.4652 | 0.7429 | 0.3103 | 0.5406 | 0.4361 | 0.2797 | 0.4267 | 0.3843 | 0.2727 | 0.3474 | 0.5341 | 0.4249 | 0.4243 |
| 0.7793 | 4810 | 0.5244 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7809 | 4820 | 0.6241 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7825 | 4830 | 0.4844 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7842 | 4840 | 0.4401 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7858 | 4850 | 0.499 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7874 | 4860 | 0.5326 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7890 | 4870 | 0.4981 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7906 | 4880 | 0.5659 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7923 | 4890 | 0.5364 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7939 | 4900 | 0.5479 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7955 | 4910 | 0.4653 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7971 | 4920 | 0.5005 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.7987 | 4930 | 0.5624 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8004 | 4940 | 0.4399 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8020 | 4950 | 0.4859 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8036 | 4960 | 0.5087 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8052 | 4970 | 0.511 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8068 | 4980 | 0.5819 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8085 | 4990 | 0.4462 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8101 | 5000 | 0.4882 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8117 | 5010 | 0.5306 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8133 | 5020 | 0.507 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8149 | 5030 | 0.4471 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8166 | 5040 | 0.5333 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8182 | 5050 | 0.4353 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8198 | 5060 | 0.5615 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8214 | 5070 | 0.5629 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8230 | 5080 | 0.5131 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8247 | 5090 | 0.4789 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8263 | 5100 | 0.4934 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8279 | 5110 | 0.5285 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8295 | 5120 | 0.4414 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8311 | 5130 | 0.5262 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8328 | 5140 | 0.4645 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8344 | 5150 | 0.4532 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8360 | 5160 | 0.4421 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8376 | 5170 | 0.4375 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8392 | 5180 | 0.5234 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8409 | 5190 | 0.4803 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8425 | 5200 | 0.4872 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8441 | 5210 | 0.451 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8457 | 5220 | 0.4388 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8473 | 5230 | 0.5182 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8490 | 5240 | 0.5302 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8506 | 5250 | 0.4643 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8522 | 5260 | 0.5581 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8538 | 5270 | 0.4643 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8554 | 5280 | 0.5288 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8571 | 5290 | 0.4133 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8587 | 5300 | 0.4664 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8603 | 5310 | 0.4814 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8619 | 5320 | 0.5256 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8635 | 5330 | 0.4904 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8652 | 5340 | 0.4495 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8668 | 5350 | 0.5389 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8684 | 5360 | 0.4497 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8700 | 5370 | 0.4776 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8716 | 5380 | 0.5441 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8733 | 5390 | 0.4473 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8749 | 5400 | 0.5598 | 0.3381 | 0.4668 | 0.7306 | 0.3137 | 0.5415 | 0.4550 | 0.2840 | 0.4169 | 0.4719 | 0.2735 | 0.3582 | 0.5311 | 0.4076 | 0.4299 |
| 0.8765 | 5410 | 0.4726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8781 | 5420 | 0.4966 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8797 | 5430 | 0.4644 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8814 | 5440 | 0.4084 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8830 | 5450 | 0.4913 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8846 | 5460 | 0.5708 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8862 | 5470 | 0.5577 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8878 | 5480 | 0.4839 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8895 | 5490 | 0.461 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8911 | 5500 | 0.4799 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8927 | 5510 | 0.5608 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8943 | 5520 | 0.4625 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8959 | 5530 | 0.4765 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8976 | 5540 | 0.4348 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.8992 | 5550 | 0.4424 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9008 | 5560 | 0.4147 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9024 | 5570 | 0.433 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9040 | 5580 | 0.4628 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9057 | 5590 | 0.4466 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9073 | 5600 | 0.4563 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9089 | 5610 | 0.4508 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9105 | 5620 | 0.4619 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9121 | 5630 | 0.4264 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9138 | 5640 | 0.5157 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9154 | 5650 | 0.4721 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9170 | 5660 | 0.4518 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9186 | 5670 | 0.4101 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9202 | 5680 | 0.4092 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9219 | 5690 | 0.4042 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9235 | 5700 | 0.3852 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9251 | 5710 | 0.375 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9267 | 5720 | 0.3548 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9283 | 5730 | 0.3461 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9300 | 5740 | 0.3396 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9316 | 5750 | 0.3465 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9332 | 5760 | 0.347 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9348 | 5770 | 0.3365 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9364 | 5780 | 0.3299 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9381 | 5790 | 0.3417 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9397 | 5800 | 0.3423 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9413 | 5810 | 0.3512 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9429 | 5820 | 0.3353 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9445 | 5830 | 0.3291 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9462 | 5840 | 0.3162 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9478 | 5850 | 0.3326 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9494 | 5860 | 0.345 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9510 | 5870 | 0.2998 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9526 | 5880 | 0.307 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9543 | 5890 | 0.3019 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9559 | 5900 | 0.3169 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9575 | 5910 | 0.2857 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9591 | 5920 | 0.3018 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9607 | 5930 | 0.2954 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9624 | 5940 | 0.2953 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9640 | 5950 | 0.2861 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9656 | 5960 | 0.3384 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9672 | 5970 | 0.2968 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9688 | 5980 | 0.3191 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9705 | 5990 | 0.3069 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9721 | 6000 | 0.3025 | 0.3322 | 0.4606 | 0.6623 | 0.3084 | 0.5552 | 0.4463 | 0.2714 | 0.4404 | 0.7084 | 0.2888 | 0.3529 | 0.4924 | 0.4138 | 0.4410 |
| 0.9737 | 6010 | 0.2891 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9753 | 6020 | 0.3038 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9769 | 6030 | 0.2931 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9786 | 6040 | 0.3145 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9802 | 6050 | 0.3046 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9818 | 6060 | 0.2896 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9834 | 6070 | 0.2926 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9850 | 6080 | 0.3025 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9867 | 6090 | 0.2798 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9883 | 6100 | 0.3006 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9899 | 6110 | 0.2695 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9915 | 6120 | 0.3017 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9931 | 6130 | 0.2955 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9948 | 6140 | 0.2699 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9964 | 6150 | 0.2955 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9980 | 6160 | 0.2963 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
| 0.9996 | 6170 | 0.2988 | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.0
- Transformers: 4.46.2
- PyTorch: 2.1.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.1.0
- Tokenizers: 0.20.3
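For reference, here is a minimal usage sketch for a Sentence Transformers model trained with the stack above; the repository id below is a placeholder, since this card's id is not repeated in this section:
```python
from sentence_transformers import SentenceTransformer

# Placeholder id — substitute the repository this card ships with.
model = SentenceTransformer("your-username/your-model")

embeddings = model.encode([
    "How do I reset my password?",
    "Steps to recover a forgotten password",
])
print(model.similarity(embeddings, embeddings))  # pairwise cosine similarity matrix
```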
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
paulellisprg/deepseek-r1-7b | paulellisprg | "2025-03-29T03:23:45Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2402.03300",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
"endpoints_compatible",
"region:us"
] | null | "2025-03-28T22:13:14Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
datasets: AI-MO/NuminaMath-TIR
library_name: transformers
model_name: deepseek-r1-7b
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for deepseek-r1-7b
This model is a fine-tuned version of [deepseek-ai/DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on the [AI-MO/NuminaMath-TIR](https://huggingface.co/datasets/AI-MO/NuminaMath-TIR) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="paulellisprg/deepseek-r1-7b", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
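Below is a minimal GRPO sketch with TRL's `GRPOTrainer` for readers who want a comparable setup; the prompt mapping and the toy reward function are illustrative assumptions, not the recipe used for this checkpoint.
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Map the dataset to the "prompt" column GRPOTrainer expects
# (assumes the TIR split exposes a "problem" field).
dataset = load_dataset("AI-MO/NuminaMath-TIR", split="train")
dataset = dataset.map(lambda row: {"prompt": row["problem"]})

def reward_boxed(completions, **kwargs):
    # Hypothetical reward: +1 when a completion contains a boxed final answer.
    return [1.0 if "\\boxed" in completion else 0.0 for completion in completions]

trainer = GRPOTrainer(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",
    reward_funcs=reward_boxed,
    args=GRPOConfig(output_dir="deepseek-r1-7b", logging_steps=10),
    train_dataset=dataset,
)
trainer.train()
```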
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
    eprint = {2402.03300},
    archivePrefix = {arXiv},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
LoneStriker/Yarn-Mistral-7b-128k-3.0bpw-h6-exl2 | LoneStriker | "2023-11-02T21:20:38Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"custom_code",
"dataset:emozilla/yarn-train-tokenized-16k-mistral",
"arxiv:2309.00071",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-02T19:12:20Z" | ---
datasets:
- emozilla/yarn-train-tokenized-16k-mistral
metrics:
- perplexity
library_name: transformers
---
# Model Card: Nous-Yarn-Mistral-7b-128k
[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

## Model Description
Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long context data for 1500 steps using the YaRN extension method.
It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 128k token context window.
To use the model, pass `trust_remote_code=True` when loading it, for example:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Mistral-7b-128k",
                                             use_flash_attention_2=True,
                                             torch_dtype=torch.bfloat16,
                                             device_map="auto",
                                             trust_remote_code=True)
```
In addition, you will need the latest version of `transformers` (until 4.35 is released):
```sh
pip install git+https://github.com/huggingface/transformers
```
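With the model loaded as above, a minimal long-context usage sketch follows; the file path, prompt, and generation settings are illustrative assumptions:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")

long_document = open("report.txt").read()  # placeholder: any long text source
prompt = long_document + "\n\nSummarize the text above in three sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```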
## Benchmarks
Long context benchmarks:
| Model | Context Window | 8k PPL | 16k PPL | 32k PPL | 64k PPL | 128k PPL |
|-------|---------------:|------:|----------:|-----:|-----:|------------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 2.96 | - | - | - | - |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 3.04 | 2.65 | 2.44 | 2.20 | - |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 3.08 | 2.68 | 2.47 | 2.24 | 2.19 |
Short context benchmarks showing that quality degradation is minimal:
| Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA |
|-------|---------------:|------:|----------:|-----:|------------:|
| [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 8k | 59.98 | 83.31 | 64.16 | 42.15 |
| [Yarn-Mistral-7b-64k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-64k) | 64k | 59.38 | 81.21 | 61.32 | 42.50 |
| [Yarn-Mistral-7b-128k](https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k) | 128k | 58.87 | 80.58 | 60.64 | 42.46 |
## Collaborators
- [bloc97](https://github.com/bloc97): Methods, paper and evals
- [@theemozilla](https://twitter.com/theemozilla): Methods, paper, model training, and evals
- [@EnricoShippole](https://twitter.com/EnricoShippole): Model training
- [honglu2875](https://github.com/honglu2875): Paper and evals
The authors would like to thank LAION AI for their support of compute for this model.
It was trained on the [JUWELS](https://www.fz-juelich.de/en/ias/jsc/systems/supercomputers/juwels) supercomputer. |
vertings6/ff91abf6-2661-4519-806a-a7b09e19c205 | vertings6 | "2025-01-20T16:23:23Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/llama-3-8b-Instruct",
"base_model:adapter:unsloth/llama-3-8b-Instruct",
"license:llama3",
"region:us"
] | null | "2025-01-20T15:53:57Z" | ---
library_name: peft
license: llama3
base_model: unsloth/llama-3-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: ff91abf6-2661-4519-806a-a7b09e19c205
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/llama-3-8b-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 1dea80ffe15eca7e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/1dea80ffe15eca7e_train_data.json
type:
field_instruction: prompt
field_output: response
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: 1
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: vertings6/ff91abf6-2661-4519-806a-a7b09e19c205
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 78GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/1dea80ffe15eca7e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f101c50f-2260-4a92-9557-65f1f625d334
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f101c50f-2260-4a92-9557-65f1f625d334
warmup_steps: 5
weight_decay: 0.001
xformers_attention: true
```
</details><br>
# ff91abf6-2661-4519-806a-a7b09e19c205
This model is a fine-tuned version of [unsloth/llama-3-8b-Instruct](https://huggingface.co/unsloth/llama-3-8b-Instruct) on an unnamed local dataset (`1dea80ffe15eca7e_train_data.json`; see the axolotl config above).
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
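Until the author fills this in, here is a minimal, hedged sketch of loading the LoRA adapter onto its base model with PEFT; the prompt and generation settings are illustrative assumptions:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3-8b-Instruct"
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "vertings6/ff91abf6-2661-4519-806a-a7b09e19c205")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# The adapter was trained with a llama3 chat template, so prompt accordingly.
messages = [{"role": "user", "content": "Summarize LoRA in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```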
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with default betas=(0.9, 0.999) and epsilon=1e-08, overridden via optimizer_args to adam_beta1=0.9, adam_beta2=0.95, adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0001 | 1 | nan |
| 0.0 | 0.0007 | 5 | nan |
| 0.0 | 0.0014 | 10 | nan |
| 0.0 | 0.0021 | 15 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kplro/model_proga | kplro | "2023-03-26T17:55:26Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-03-26T16:10:00Z" | ---
pipeline_tag: text-generation
language:
- ru
library_name: transformers
--- |
abdullahhatem/ppo-SnowballTarget | abdullahhatem | "2025-03-05T11:25:23Z" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2025-03-05T11:25:02Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: abdullahhatem/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF | DoppelReflEx | "2025-03-04T03:56:04Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:DoppelReflEx/MiniusLight-24B-test",
"base_model:quantized:DoppelReflEx/MiniusLight-24B-test",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-04T03:54:59Z" | ---
base_model: DoppelReflEx/MiniusLight-24B-test
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF
This model was converted to GGUF format from [`DoppelReflEx/MiniusLight-24B-test`](https://huggingface.co/DoppelReflEx/MiniusLight-24B-test) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DoppelReflEx/MiniusLight-24B-test) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF --hf-file miniuslight-24b-test-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF --hf-file miniuslight-24b-test-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF --hf-file miniuslight-24b-test-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF --hf-file miniuslight-24b-test-q4_k_s.gguf -c 2048
```
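
The same file can also be used from Python via the `llama-cpp-python` bindings (a sketch, assuming the package is installed; `n_ctx` mirrors the `-c 2048` server example above):

```python
from llama_cpp import Llama

# Downloads the GGUF from the Hub on first use, then runs it locally
llm = Llama.from_pretrained(
    repo_id="DoppelReflEx/MiniusLight-24B-test-Q4_K_S-GGUF",
    filename="miniuslight-24b-test-q4_k_s.gguf",
    n_ctx=2048,
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```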
|
tz579/example_asr_wav2vec2 | tz579 | "2024-05-26T01:27:44Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"edinburghcstr/ami",
"generated_from_trainer",
"dataset:ami",
"base_model:facebook/wav2vec2-large-lv60",
"base_model:finetune:facebook/wav2vec2-large-lv60",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-05-24T20:28:06Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-lv60
tags:
- automatic-speech-recognition
- edinburghcstr/ami
- generated_from_trainer
datasets:
- ami
metrics:
- wer
model-index:
- name: facebook/wav2vec2-large-lv60
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: EDINBURGHCSTR/AMI - IHM
type: ami
config: ihm
split: None
args: 'Config: ihm, Training split: train, Eval split: validation'
metrics:
- name: Wer
type: wer
value: 0.9542044754234227
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# facebook/wav2vec2-large-lv60
This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the EDINBURGHCSTR/AMI - IHM dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2723
- Wer: 0.9542
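
As a usage sketch (an assumption — this presumes the repository also contains the processor/tokenizer files saved alongside the weights; given the final WER of ~0.95, transcriptions from this last checkpoint are likely poor):

```python
from transformers import pipeline

# The audio filename is illustrative
asr = pipeline("automatic-speech-recognition", model="tz579/example_asr_wav2vec2")
print(asr("meeting_clip.wav")["text"])
```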
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:------:|
| 1.0919 | 0.1565 | 1000 | 1.0169 | 0.7064 |
| 1.4768 | 0.3131 | 2000 | 0.7156 | 0.4356 |
| 0.9728 | 0.4696 | 3000 | 0.6462 | 0.4030 |
| 0.5418 | 0.6262 | 4000 | 0.6171 | 0.3707 |
| 0.8492 | 0.7827 | 5000 | 0.5758 | 0.3695 |
| 1.4826 | 0.9393 | 6000 | 0.5801 | 0.3545 |
| 0.3274 | 1.0958 | 7000 | 0.5244 | 0.3375 |
| 0.2089 | 1.2523 | 8000 | 0.5047 | 0.3239 |
| 0.2916 | 1.4089 | 9000 | 0.4901 | 0.3171 |
| 0.1617 | 1.5654 | 10000 | 0.5070 | 0.3151 |
| 0.3815 | 1.7220 | 11000 | 0.4948 | 0.3180 |
| 1.0171 | 1.8785 | 12000 | 0.9465 | 0.8379 |
### Framework versions
- Transformers 4.42.0.dev0
- Pytorch 2.3.0a0+gitcd033a1
- Datasets 2.19.1
- Tokenizers 0.19.1
|
RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf | RichardErkhov | "2024-09-20T09:48:38Z" | 12 | 0 | null | [
"gguf",
"arxiv:2406.04313",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-20T06:36:03Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-7B-Instruct-RR - GGUF
- Model creator: https://huggingface.co/GraySwanAI/
- Original model: https://huggingface.co/GraySwanAI/Mistral-7B-Instruct-RR/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-7B-Instruct-RR.Q2_K.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q2_K.gguf) | Q2_K | 2.53GB |
| [Mistral-7B-Instruct-RR.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.IQ3_XS.gguf) | IQ3_XS | 2.81GB |
| [Mistral-7B-Instruct-RR.IQ3_S.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.IQ3_S.gguf) | IQ3_S | 2.96GB |
| [Mistral-7B-Instruct-RR.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q3_K_S.gguf) | Q3_K_S | 2.95GB |
| [Mistral-7B-Instruct-RR.IQ3_M.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.IQ3_M.gguf) | IQ3_M | 3.06GB |
| [Mistral-7B-Instruct-RR.Q3_K.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q3_K.gguf) | Q3_K | 3.28GB |
| [Mistral-7B-Instruct-RR.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q3_K_M.gguf) | Q3_K_M | 3.28GB |
| [Mistral-7B-Instruct-RR.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q3_K_L.gguf) | Q3_K_L | 3.56GB |
| [Mistral-7B-Instruct-RR.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.IQ4_XS.gguf) | IQ4_XS | 3.67GB |
| [Mistral-7B-Instruct-RR.Q4_0.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q4_0.gguf) | Q4_0 | 3.83GB |
| [Mistral-7B-Instruct-RR.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.IQ4_NL.gguf) | IQ4_NL | 3.87GB |
| [Mistral-7B-Instruct-RR.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q4_K_S.gguf) | Q4_K_S | 3.86GB |
| [Mistral-7B-Instruct-RR.Q4_K.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q4_K.gguf) | Q4_K | 4.07GB |
| [Mistral-7B-Instruct-RR.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q4_K_M.gguf) | Q4_K_M | 4.07GB |
| [Mistral-7B-Instruct-RR.Q4_1.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q4_1.gguf) | Q4_1 | 4.24GB |
| [Mistral-7B-Instruct-RR.Q5_0.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q5_0.gguf) | Q5_0 | 4.65GB |
| [Mistral-7B-Instruct-RR.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q5_K_S.gguf) | Q5_K_S | 4.65GB |
| [Mistral-7B-Instruct-RR.Q5_K.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q5_K.gguf) | Q5_K | 4.78GB |
| [Mistral-7B-Instruct-RR.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q5_K_M.gguf) | Q5_K_M | 4.78GB |
| [Mistral-7B-Instruct-RR.Q5_1.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q5_1.gguf) | Q5_1 | 5.07GB |
| [Mistral-7B-Instruct-RR.Q6_K.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q6_K.gguf) | Q6_K | 5.53GB |
| [Mistral-7B-Instruct-RR.Q8_0.gguf](https://huggingface.co/RichardErkhov/GraySwanAI_-_Mistral-7B-Instruct-RR-gguf/blob/main/Mistral-7B-Instruct-RR.Q8_0.gguf) | Q8_0 | 7.17GB |
Original model description:
# Model Details
Mistral-7B-Instruct-RR is a Mistral-7B model with circuit breakers inserted using Representation Rerouting (RR).
Circuit Breaking is a new approach inspired by [representation engineering](https://ai-transparency.org/), designed to prevent AI systems from generating harmful content by directly altering harmful model representations, with minimal capability degradation. For more information, [please check out our paper](https://arxiv.org/abs/2406.04313).
<p align="center">
<img src="https://github.com/GraySwanAI/circuit-breakers/raw/main/assets/mistral_splash.png" width="800"/>
</p>
|
matthieuzone/CHEVREbis | matthieuzone | "2024-05-20T18:43:49Z" | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-05-20T18:35:41Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CHEVREbis
<Gallery />
## Model description
These are matthieuzone/CHEVREbis LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/CHEVREbis/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
HanningZhang/Llama3-sft-gsm8k-c2c50K-w2c48K-c200K-2ep | HanningZhang | "2025-01-15T00:02:44Z" | 51 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-14T21:53:03Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF | mradermacher | "2024-10-11T18:49:17Z" | 90 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:DavidAU/MN-Dark-Planet-TITAN-12B",
"base_model:quantized:DavidAU/MN-Dark-Planet-TITAN-12B",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-10-09T11:44:09Z" | ---
base_model: DavidAU/MN-Dark-Planet-TITAN-12B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/DavidAU/MN-Dark-Planet-TITAN-12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
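
As a concrete example, a single quant can be fetched programmatically with `huggingface_hub` (a sketch; the filename matches the Q4_K_M row below):

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file from this repo and returns its local path
path = hf_hub_download(
    repo_id="mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF",
    filename="MN-Dark-Planet-TITAN-12B.i1-Q4_K_M.gguf",
)
print(path)
```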
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/MN-Dark-Planet-TITAN-12B-i1-GGUF/resolve/main/MN-Dark-Planet-TITAN-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
mradermacher/vicuna-7b-v1.5-uncensored-GGUF | mradermacher | "2024-07-09T22:20:42Z" | 99 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:jdqqjr/vicuna-7b-v1.5-uncensored",
"base_model:quantized:jdqqjr/vicuna-7b-v1.5-uncensored",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-09T11:03:34Z" | ---
base_model: jdqqjr/vicuna-7b-v1.5-uncensored
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/jdqqjr/vicuna-7b-v1.5-uncensored
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vicuna-7b-v1.5-uncensored-GGUF/resolve/main/vicuna-7b-v1.5-uncensored.f16.gguf) | f16 | 13.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Arise2003/Florence-new-ft-11 | Arise2003 | "2025-04-15T11:38:26Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-15T11:38:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MehdiHosseiniMoghadam/AVA-Mixtral-4x7B | MehdiHosseiniMoghadam | "2024-02-27T23:57:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-02-27T00:13:12Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nitinbhayana/alpacha-adapter | nitinbhayana | "2023-09-13T05:22:30Z" | 2 | 1 | peft | [
"peft",
"region:us"
] | null | "2023-09-12T19:41:32Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
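
For reference, a sketch of how this config maps onto `transformers`/`peft` code. The base model is an assumption — the card does not name it — so `huggyllama/llama-7b` below is a stand-in:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Mirrors the quantization config above (8-bit, llm_int8_threshold=6.0)
bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",  # assumption: replace with the actual base model
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nitinbhayana/alpacha-adapter")
```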
### Framework versions
- PEFT 0.6.0.dev0
|
KalyanRamM/w2v-bert-2.0-kannada-colab-CV16.0 | KalyanRamM | "2024-11-10T08:03:57Z" | 19 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-11-07T17:48:41Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold5 | hkivancoral | "2023-11-10T19:42:06Z" | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-tiny-patch16-224",
"base_model:finetune:facebook/deit-tiny-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-11-10T19:39:22Z" | ---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: hushem_1x_deit_tiny_rms_lr00001_fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.7073170731707317
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hushem_1x_deit_tiny_rms_lr00001_fold5
This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8230
- Accuracy: 0.7073
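
As a usage sketch (an assumption — this presumes the image processor was pushed alongside the weights):

```python
from transformers import pipeline

# The image path is illustrative
clf = pipeline("image-classification", model="hkivancoral/hushem_1x_deit_tiny_rms_lr00001_fold5")
print(clf("example.jpg"))
```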
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 1.1737 | 0.4634 |
| 1.1816 | 2.0 | 12 | 0.8675 | 0.5366 |
| 1.1816 | 3.0 | 18 | 0.8079 | 0.6341 |
| 0.5246 | 4.0 | 24 | 0.8632 | 0.5854 |
| 0.2225 | 5.0 | 30 | 0.7815 | 0.5610 |
| 0.2225 | 6.0 | 36 | 0.6787 | 0.6585 |
| 0.0792 | 7.0 | 42 | 0.7052 | 0.6585 |
| 0.0792 | 8.0 | 48 | 0.7120 | 0.6341 |
| 0.029 | 9.0 | 54 | 0.8373 | 0.6585 |
| 0.0096 | 10.0 | 60 | 0.6713 | 0.7317 |
| 0.0096 | 11.0 | 66 | 0.7185 | 0.7073 |
| 0.0045 | 12.0 | 72 | 0.7237 | 0.6829 |
| 0.0045 | 13.0 | 78 | 0.7062 | 0.6829 |
| 0.0033 | 14.0 | 84 | 0.7203 | 0.7073 |
| 0.0025 | 15.0 | 90 | 0.7207 | 0.7073 |
| 0.0025 | 16.0 | 96 | 0.7400 | 0.7073 |
| 0.002 | 17.0 | 102 | 0.7337 | 0.6829 |
| 0.002 | 18.0 | 108 | 0.7527 | 0.6829 |
| 0.0017 | 19.0 | 114 | 0.7553 | 0.6829 |
| 0.0015 | 20.0 | 120 | 0.7631 | 0.6829 |
| 0.0015 | 21.0 | 126 | 0.7684 | 0.6829 |
| 0.0014 | 22.0 | 132 | 0.7730 | 0.6829 |
| 0.0014 | 23.0 | 138 | 0.7803 | 0.6829 |
| 0.0012 | 24.0 | 144 | 0.7869 | 0.6829 |
| 0.0011 | 25.0 | 150 | 0.7854 | 0.6829 |
| 0.0011 | 26.0 | 156 | 0.7958 | 0.6829 |
| 0.001 | 27.0 | 162 | 0.7899 | 0.6829 |
| 0.001 | 28.0 | 168 | 0.7956 | 0.6829 |
| 0.001 | 29.0 | 174 | 0.8038 | 0.6829 |
| 0.0009 | 30.0 | 180 | 0.8059 | 0.6829 |
| 0.0009 | 31.0 | 186 | 0.8121 | 0.6829 |
| 0.0008 | 32.0 | 192 | 0.8137 | 0.6829 |
| 0.0008 | 33.0 | 198 | 0.8161 | 0.6829 |
| 0.0008 | 34.0 | 204 | 0.8136 | 0.6829 |
| 0.0008 | 35.0 | 210 | 0.8158 | 0.6829 |
| 0.0008 | 36.0 | 216 | 0.8175 | 0.7073 |
| 0.0007 | 37.0 | 222 | 0.8190 | 0.7073 |
| 0.0007 | 38.0 | 228 | 0.8213 | 0.7073 |
| 0.0007 | 39.0 | 234 | 0.8222 | 0.7073 |
| 0.0007 | 40.0 | 240 | 0.8227 | 0.7073 |
| 0.0007 | 41.0 | 246 | 0.8228 | 0.7073 |
| 0.0007 | 42.0 | 252 | 0.8230 | 0.7073 |
| 0.0007 | 43.0 | 258 | 0.8230 | 0.7073 |
| 0.0007 | 44.0 | 264 | 0.8230 | 0.7073 |
| 0.0007 | 45.0 | 270 | 0.8230 | 0.7073 |
| 0.0007 | 46.0 | 276 | 0.8230 | 0.7073 |
| 0.0007 | 47.0 | 282 | 0.8230 | 0.7073 |
| 0.0007 | 48.0 | 288 | 0.8230 | 0.7073 |
| 0.0007 | 49.0 | 294 | 0.8230 | 0.7073 |
| 0.0007 | 50.0 | 300 | 0.8230 | 0.7073 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf | RichardErkhov | "2024-07-28T01:32:53Z" | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-27T19:20:43Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
DataVortexS-10.7B-dpo-v1.12 - GGUF
- Model creator: https://huggingface.co/Edentns/
- Original model: https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.12/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [DataVortexS-10.7B-dpo-v1.12.Q2_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q2_K.gguf) | Q2_K | 3.73GB |
| [DataVortexS-10.7B-dpo-v1.12.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.IQ3_XS.gguf) | IQ3_XS | 4.14GB |
| [DataVortexS-10.7B-dpo-v1.12.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.IQ3_S.gguf) | IQ3_S | 4.37GB |
| [DataVortexS-10.7B-dpo-v1.12.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q3_K_S.gguf) | Q3_K_S | 4.34GB |
| [DataVortexS-10.7B-dpo-v1.12.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.IQ3_M.gguf) | IQ3_M | 4.51GB |
| [DataVortexS-10.7B-dpo-v1.12.Q3_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q3_K.gguf) | Q3_K | 4.84GB |
| [DataVortexS-10.7B-dpo-v1.12.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q3_K_M.gguf) | Q3_K_M | 4.84GB |
| [DataVortexS-10.7B-dpo-v1.12.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q3_K_L.gguf) | Q3_K_L | 5.26GB |
| [DataVortexS-10.7B-dpo-v1.12.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.IQ4_XS.gguf) | IQ4_XS | 5.43GB |
| [DataVortexS-10.7B-dpo-v1.12.Q4_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q4_0.gguf) | Q4_0 | 5.66GB |
| [DataVortexS-10.7B-dpo-v1.12.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.IQ4_NL.gguf) | IQ4_NL | 5.72GB |
| [DataVortexS-10.7B-dpo-v1.12.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q4_K_S.gguf) | Q4_K_S | 5.7GB |
| [DataVortexS-10.7B-dpo-v1.12.Q4_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q4_K.gguf) | Q4_K | 6.02GB |
| [DataVortexS-10.7B-dpo-v1.12.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q4_K_M.gguf) | Q4_K_M | 6.02GB |
| [DataVortexS-10.7B-dpo-v1.12.Q4_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q4_1.gguf) | Q4_1 | 6.27GB |
| [DataVortexS-10.7B-dpo-v1.12.Q5_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q5_0.gguf) | Q5_0 | 6.89GB |
| [DataVortexS-10.7B-dpo-v1.12.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q5_K_S.gguf) | Q5_K_S | 6.89GB |
| [DataVortexS-10.7B-dpo-v1.12.Q5_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q5_K.gguf) | Q5_K | 7.08GB |
| [DataVortexS-10.7B-dpo-v1.12.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q5_K_M.gguf) | Q5_K_M | 7.08GB |
| [DataVortexS-10.7B-dpo-v1.12.Q5_1.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q5_1.gguf) | Q5_1 | 7.51GB |
| [DataVortexS-10.7B-dpo-v1.12.Q6_K.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q6_K.gguf) | Q6_K | 8.2GB |
| [DataVortexS-10.7B-dpo-v1.12.Q8_0.gguf](https://huggingface.co/RichardErkhov/Edentns_-_DataVortexS-10.7B-dpo-v1.12-gguf/blob/main/DataVortexS-10.7B-dpo-v1.12.Q8_0.gguf) | Q8_0 | 10.62GB |
Original model description:
---
tags:
- text-generation
license: cc-by-nc-sa-4.0
language:
- ko
base_model: megastudy/M-SOLAR-10.7B-v1.3
pipeline_tag: text-generation
---
# **DataVortexS-10.7B-dpo-v1.12**
<img src="./DataVortex.png" alt="DataVortex" style="height: 8em;">
## Our Team
| Research & Engineering | Product Management |
| :--------------------: | :----------------: |
| Kwangseok Yang | Seunghyun Choi |
| Jeongwon Choi | Hyoseok Choi |
## **Model Details**
### **Base Model**
[megastudy/M-SOLAR-10.7B-v1.3](https://huggingface.co/megastudy/M-SOLAR-10.7B-v1.3)
### **Trained On**
- **OS**: Ubuntu 22.04
- **GPU**: H100 80GB 4ea
- **transformers**: v4.36.2
### **Instruction format**
It follows **Alpaca (Chat)** format.
E.g.
```python
text = """\
### System:
당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다.
### User:
대한민국의 수도는 어디야?
### Assistant:
대한민국의 수도는 서울입니다.
### User:
서울 인구는 총 몇 명이야?
"""
```
## **Model Benchmark**
### **[Ko LM Eval Harness](https://github.com/Beomi/ko-lm-evaluation-harness)**
| Task | 0-shot | 5-shot | 10-shot | 50-shot |
| :--------------- | -----------: | ----------: | -----------: | -----------: |
| kobest_boolq | 0.895272 | 0.93443 | 0.938023 | 0.940851 |
| kobest_copa | 0.735618 | 0.778902 | 0.790925 | 0.809938 |
| kobest_hellaswag | 0.490442 | 0.481539 | 0.478118 | 0.494714 |
| kobest_sentineg | 0.782981 | 0.95213 | 0.952136 | 0.947082 |
| **Average** | **0.726078** | **0.78675** | **0.789801** | **0.798146** |
### **[Ko-LLM-Leaderboard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard)**
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| ------: | -----: | -----------: | ------: | ------------: | --------------: |
| 57.61 | 54.44 | 67.21 | 54.09 | 61.88 | 50.41 |
## **Implementation Code**
This model ships with a chat_template defining its instruction format.
You can use the code below.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.12")
tokenizer = AutoTokenizer.from_pretrained("Edentns/DataVortexS-10.7B-dpo-v1.12")
# Korean example conversation: the system prompt says "You are an AI assistant
# that helps people find information"; the user asks for the capital of South
# Korea and then for Seoul's total population.
messages = [
{"role": "system", "content": "당신은 사람들이 정보를 찾을 수 있도록 도와주는 인공지능 비서입니다."},
{"role": "user", "content": "대한민국의 수도는 어디야?"},
{"role": "assistant", "content": "대한민국의 수도는 서울입니다."},
{"role": "user", "content": "서울 인구는 총 몇 명이야?"}
]
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)
model.to(device)
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
## **License**
The model is licensed under the [cc-by-nc-sa-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license, which allows others to copy, modify, and share the work non-commercially, as long as they give appropriate credit and distribute any derivative works under the same license.
<div align="center">
<a href="https://edentns.com/">
<img src="./Logo.png" alt="Logo" style="height: 3em;">
</a>
</div>
|
LoneStriker/Sheared-LLaMA-2.7B-ShareGPT-6.0bpw-h6-exl2 | LoneStriker | "2023-12-31T07:29:13Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"arxiv:2310.06694",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-31T07:28:12Z" | ---
license: apache-2.0
---
**Paper**: [https://arxiv.org/pdf/2310.06694.pdf](https://arxiv.org/pdf/2310.06694.pdf)
**Code**: https://github.com/princeton-nlp/LLM-Shearing
**Models**: [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B), [Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B)
## Training information
This is the instruction-tuned version of [princeton-nlp/Sheared-LLaMA-2.7B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-2.7B). We trained the base model on 10,000 instruction-response pairs
sampled from the ShareGPT dataset (first turns only). We used the following prompt to perform instruction tuning.
> You are a helpful assistant. Write a response that appropriately completes the request.\n\n### Input:\n{input}\n\n### Response:
This model can be loaded through `transformers.LlamaForCausalLM` as follows:
```python
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT")
```
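A minimal generation sketch with the prompt format above (illustrative; the generation settings here are arbitrary):
```python
from transformers import AutoTokenizer, LlamaForCausalLM

tokenizer = AutoTokenizer.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT")
model = LlamaForCausalLM.from_pretrained("princeton-nlp/Sheared-LLaMA-2.7B-ShareGPT")

# Fill the instruction-tuning template described above
prompt = (
    "You are a helpful assistant. Write a response that appropriately "
    "completes the request.\n\n### Input:\nWhat is structured pruning?\n\n### Response:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```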
## Bibtex
If you find our model useful, consider citing us with:
```
@article{xia2023sheared,
title={Sheared llama: Accelerating language model pre-training via structured pruning},
author={Xia, Mengzhou and Gao, Tianyu and Zeng, Zhiyuan and Chen, Danqi},
journal={arXiv preprint arXiv:2310.06694},
year={2023}
}
```
|
chauhoang/69059d09-0bf5-44b6-81d4-cb39fb391576 | chauhoang | "2025-01-25T04:07:22Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | "2025-01-25T04:05:25Z" | ---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 69059d09-0bf5-44b6-81d4-cb39fb391576
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- adb8f10c3929bd22_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/adb8f10c3929bd22_train_data.json
type:
field_input: Category Label
field_instruction: Product Title
field_output: Cluster Label
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: chauhoang/69059d09-0bf5-44b6-81d4-cb39fb391576
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/adb8f10c3929bd22_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1ea7c675-4778-48c0-b140-f57c55fbe73a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1ea7c675-4778-48c0-b140-f57c55fbe73a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 69059d09-0bf5-44b6-81d4-cb39fb391576
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.0902
## Model description
More information needed
## Intended uses & limitations
More information needed
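Since this checkpoint is a PEFT LoRA adapter rather than a full model, a minimal loading sketch with 🤗 PEFT might look like the following (illustrative only, not from the original card; the base model is a tiny testing checkpoint that requires `trust_remote_code`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the tiny Falcon base model, then attach the LoRA adapter on top
base = AutoModelForCausalLM.from_pretrained(
    "fxmarty/really-tiny-falcon-testing", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "chauhoang/69059d09-0bf5-44b6-81d4-cb39fb391576")
tokenizer = AutoTokenizer.from_pretrained(
    "fxmarty/really-tiny-falcon-testing", trust_remote_code=True
)
```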
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 11.0933 |
| 44.386 | 0.0024 | 10 | 11.0929 |
| 44.3565 | 0.0048 | 20 | 11.0916 |
| 44.386 | 0.0072 | 30 | 11.0907 |
| 44.3584 | 0.0096 | 40 | 11.0903 |
| 44.433 | 0.0120 | 50 | 11.0902 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
LarryAIDraw/leone-10 | LarryAIDraw | "2023-11-12T04:54:45Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-11-12T04:54:15Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/145400/leone-akame-ga-kill |
Guizmus/BloodborneDiffusion | Guizmus | "2023-05-16T09:25:45Z" | 73 | 25 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"text-to-image",
"image-to-image",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-06T21:57:19Z" | ---
language:
- en
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/Guizmus/BloodborneDiffusion/resolve/main/bloodbornestyle_showcase.jpg"
tags:
- stable-diffusion
- text-to-image
- image-to-image
inference: true
---
# Bloodborne Diffusion
<p>
<img src="https://huggingface.co/Guizmus/BloodborneDiffusion/resolve/main/bloodbornestyle_showcase.jpg"/><br/>
This is a Dreamboothed Stable Diffusion model trained on the style of the Bloodborne series.<br/>
The total dataset is made of 100 pictures, and training was done on runwayml 1.5 with the new VAE, for 12k steps (polynomial LR schedule, LR 1e-6).<br/>
The token "Bloodborne Style" will bring in the new concept.<br/>
The recommended sampling is k_Euler_a or DPM++ 2M Karras at 20 steps, CFG scale 7.
</p>
[CKPT download link](https://huggingface.co/Guizmus/Bloodborne/resolve/main/BloodborneStyle-v1-1.ckpt)
## 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion documentation](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "Guizmus/BloodborneDiffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a red moon, Bloodborne Style"
image = pipe(prompt).images[0]
image.save("./BloodborneStyle.png")
```
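To apply the recommended sampler settings in diffusers, one option (a sketch continuing from the snippet above; `DPMSolverMultistepScheduler` with Karras sigmas corresponds to DPM++ 2M Karras) is:
```python
from diffusers import DPMSolverMultistepScheduler

# Swap in DPM++ 2M Karras and use the recommended steps / CFG scale
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)
image = pipe(prompt, num_inference_steps=20, guidance_scale=7).images[0]
```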
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) |
nadam54321/Carlos-Chatbot-Finetuned | nadam54321 | "2025-01-09T10:17:46Z" | 35 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-09T10:15:24Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nadam54321
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HUMADEX/portugese_medical_ner | HUMADEX | "2024-10-11T12:23:11Z" | 20 | 0 | null | [
"pytorch",
"bert",
"NER",
"medical",
"symptoms",
"extraction",
"portugese",
"token-classification",
"pt",
"dataset:HUMADEX/portugese_ner_dataset",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
] | token-classification | "2024-10-10T13:05:35Z" | ---
license: apache-2.0
datasets:
- HUMADEX/portugese_ner_dataset
language:
- pt
metrics:
- f1
- precision
- recall
- confusion_matrix
base_model:
- google-bert/bert-base-cased
pipeline_tag: token-classification
tags:
- NER
- medical
- symptoms
- extraction
- portugese
---
# Portuguese Medical NER
## Acknowledgement
This model was created as part of joint research by the HUMADEX research group (https://www.linkedin.com/company/101563689/) and has received funding from the European Union Horizon Europe Research and Innovation Program project SMILE (grant number 101080923) and the Marie Skłodowska-Curie Actions (MSCA) Doctoral Networks project BosomShield (grant number 101073222). Responsibility for the information and views expressed herein lies entirely with the authors.
Authors:
dr. Izidor Mlakar, Rigon Sallauka, dr. Umut Arioz, dr. Matej Rojc
## Use
- **Primary Use Case**: This model is designed to extract medical entities such as symptoms, diagnostic tests, and treatments from clinical text in the Portuguese language.
- **Applications**: Suitable for healthcare professionals, clinical data analysis, and research into medical text processing.
- **Supported Entity Types**:
- `PROBLEM`: Diseases, symptoms, and medical conditions.
- `TEST`: Diagnostic procedures and laboratory tests.
- `TREATMENT`: Medications, therapies, and other medical interventions.
## Training Data
- **Data Sources**: Annotated datasets, including clinical data and translations of English medical text into Portuguese.
- **Data Augmentation**: The training dataset underwent data augmentation techniques to improve the model's ability to generalize to different text structures.
- **Dataset Split**:
- **Training Set**: 80%
- **Validation Set**: 10%
- **Test Set**: 10%
## Model Training
- **Training Configuration**:
- **Optimizer**: AdamW
- **Learning Rate**: 3e-5
- **Batch Size**: 64
- **Epochs**: 200
- **Loss Function**: Focal Loss to handle class imbalance
- **Frameworks**: PyTorch, Hugging Face Transformers, SimpleTransformers
## Evaluation metrics
- eval_loss = 0.34290624315439794
- f1_score = 0.7720704622812219
- precision = 0.7724936121316581
- recall = 0.7716477757556993
Visit [HUMADEX/Weekly-Supervised-NER-pipline](https://github.com/HUMADEX/Weekly-Supervised-NER-pipline) for more info.
## How to Use
You can easily use this model with the Hugging Face `transformers` library. Here's an example of how to load and use the model for inference:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
model_name = "HUMADEX/portugese_medical_ner"
# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# Sample text for inference
text = "O paciente reclamou de fortes dores de cabeça e náusea que persistiram por dois dias. Para aliviar os sintomas, foi prescrito paracetamol e recomendado descansar e beber bastante líquidos."
# Tokenize the input text
inputs = tokenizer(text, return_tensors="pt")

# The original card is truncated here; the lines below are an illustrative
# continuation that runs the model and maps predictions back to entity labels.
import torch

with torch.no_grad():
    outputs = model(**inputs)
predicted_ids = outputs.logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, label_id in zip(tokens, predicted_ids):
    print(token, model.config.id2label[label_id.item()])
```
 |
tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF | tensorblock | "2024-11-20T11:51:05Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:chihoonlee10/T3Q-ko-solar-dpo-v1.0",
"base_model:quantized:chihoonlee10/T3Q-ko-solar-dpo-v1.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-20T10:25:04Z" | ---
library_name: transformers
license: apache-2.0
base_model: chihoonlee10/T3Q-ko-solar-dpo-v1.0
tags:
- TensorBlock
- GGUF
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## chihoonlee10/T3Q-ko-solar-dpo-v1.0 - GGUF
This repo contains GGUF format model files for [chihoonlee10/T3Q-ko-solar-dpo-v1.0](https://huggingface.co/chihoonlee10/T3Q-ko-solar-dpo-v1.0).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
### System:
{system_prompt}
### User:
{prompt}
### Assistant:
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [T3Q-ko-solar-dpo-v1.0-Q2_K.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q2_K.gguf) | Q2_K | 3.728 GB | smallest, significant quality loss - not recommended for most purposes |
| [T3Q-ko-solar-dpo-v1.0-Q3_K_S.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q3_K_S.gguf) | Q3_K_S | 4.344 GB | very small, high quality loss |
| [T3Q-ko-solar-dpo-v1.0-Q3_K_M.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q3_K_M.gguf) | Q3_K_M | 4.839 GB | very small, high quality loss |
| [T3Q-ko-solar-dpo-v1.0-Q3_K_L.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q3_K_L.gguf) | Q3_K_L | 5.263 GB | small, substantial quality loss |
| [T3Q-ko-solar-dpo-v1.0-Q4_0.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q4_0.gguf) | Q4_0 | 5.655 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [T3Q-ko-solar-dpo-v1.0-Q4_K_S.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q4_K_S.gguf) | Q4_K_S | 5.698 GB | small, greater quality loss |
| [T3Q-ko-solar-dpo-v1.0-Q4_K_M.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q4_K_M.gguf) | Q4_K_M | 6.018 GB | medium, balanced quality - recommended |
| [T3Q-ko-solar-dpo-v1.0-Q5_0.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q5_0.gguf) | Q5_0 | 6.889 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [T3Q-ko-solar-dpo-v1.0-Q5_K_S.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q5_K_S.gguf) | Q5_K_S | 6.889 GB | large, low quality loss - recommended |
| [T3Q-ko-solar-dpo-v1.0-Q5_K_M.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q5_K_M.gguf) | Q5_K_M | 7.076 GB | large, very low quality loss - recommended |
| [T3Q-ko-solar-dpo-v1.0-Q6_K.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q6_K.gguf) | Q6_K | 8.200 GB | very large, extremely low quality loss |
| [T3Q-ko-solar-dpo-v1.0-Q8_0.gguf](https://huggingface.co/tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF/blob/main/T3Q-ko-solar-dpo-v1.0-Q8_0.gguf) | Q8_0 | 10.621 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:
```shell
huggingface-cli download tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF --include "T3Q-ko-solar-dpo-v1.0-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
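The same download can also be scripted from Python with `huggingface_hub` (a small sketch; pick whichever quant file you need from the table above):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/T3Q-ko-solar-dpo-v1.0-GGUF",
    filename="T3Q-ko-solar-dpo-v1.0-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```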
|
LarryAIDraw/Fenrys_lv2kc_ | LarryAIDraw | "2024-05-22T20:28:54Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2024-05-22T20:17:26Z" | ---
license: creativeml-openrail-m
---
https://civitai.com/models/410616/lv2fenrys-chillin-different-world-life-of-the-ex-brave-candidate-was-cheat-from-lv2 |
muharamesa/trainMistral | muharamesa | "2024-04-25T06:40:22Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-04-24T04:30:41Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ChiSu001/SAT-HMR | ChiSu001 | "2025-02-18T07:53:51Z" | 5 | 2 | null | [
"Human Mesh Recovery",
"Human Pose and Shape Estimation",
"Multi-Person Mesh Recovery",
"arxiv:2411.19824",
"region:us"
] | null | "2025-01-13T14:37:16Z" | ---
tags:
- Human Mesh Recovery
- Human Pose and Shape Estimation
- Multi-Person Mesh Recovery
arxiv: '2411.19824'
---
# SAT-HMR
Official [PyTorch](https://pytorch.org/) implementation of our paper:
<h3 align="center">SAT-HMR: Real-Time Multi-Person 3D Mesh Estimation via Scale-Adaptive Tokens</h3>
<h4 align="center" style="text-decoration: none;">
<a href="https://github.com/ChiSu001/", target="_blank"><b>Chi Su</b></a>
,
<a href="https://shirleymaxx.github.io/", target="_blank"><b>Xiaoxuan Ma</b></a>
,
<a href="https://scholar.google.com/citations?user=DoUvUz4AAAAJ&hl=en", target="_blank"><b>Jiajun Su</b></a>
,
<a href="https://cfcs.pku.edu.cn/english/people/faculty/yizhouwang/index.htm", target="_blank"><b>Yizhou Wang</b></a>
</h4>
<h3 align="center">
<a href="https://arxiv.org/abs/2411.19824", target="_blank">Paper</a> |
<a href="https://ChiSu001.github.io/SAT-HMR", target="_blank">Project Page</a> |
<a href="https://youtu.be/tqURcr_nCQY", target="_blank">Video</a> |
<a href="https://github.com/ChiSu001/SAT-HMR", target="_blank">GitHub</a>
</h3>
<!-- <div align="center">
<img src="figures/results.png" width="70%">
<img src="figures/results_3d.gif" width="29%">
</div> -->
<!-- <h3> Overview of SAT-HMR </h3> -->
<p align="center">
<img src="figures/pipeline.png"/>
</p>
<!-- <p align="center">
<img src="figures/pipeline.png" style="height: 300px; object-fit: cover;"/>
</p> -->
## Installation
We tested with python 3.11, PyTorch 2.4.1 and CUDA 12.1.
1. Create a conda environment.
```bash
conda create -n sathmr python=3.11 -y
conda activate sathmr
```
2. Install [PyTorch](https://pytorch.org/) and [xFormers](https://github.com/facebookresearch/xformers).
```bash
# Install PyTorch. Follow the official instructions (https://pytorch.org/) and adapt the CUDA version to yours.
conda install pytorch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 pytorch-cuda=12.1 -c pytorch -c nvidia
# Install xFormers. Follow the official instructions (https://github.com/facebookresearch/xformers) and adapt the CUDA version to yours.
pip install -U xformers==0.0.28.post1 --index-url https://download.pytorch.org/whl/cu121
```
3. Install other dependencies.
```bash
pip install -r requirements.txt
```
4. You may need to modify the `chumpy` package to avoid errors. For detailed instructions, please check [this guidance](docs/fix_chumpy.md).
## Download Models & Weights
1. Download SMPL-related weights.
- Download `basicModel_f_lbs_10_207_0_v1.0.0.pkl`, `basicModel_m_lbs_10_207_0_v1.0.0.pkl`, and `basicModel_neutral_lbs_10_207_0_v1.0.0.pkl` from [here](https://smpl.is.tue.mpg.de/) (female & male) and [here](http://smplify.is.tue.mpg.de/) (neutral) to `${Project}/weights/smpl_data/smpl`. Please rename them as `SMPL_FEMALE.pkl`, `SMPL_MALE.pkl`, and `SMPL_NEUTRAL.pkl`, respectively.
- Download others from [Google drive](https://drive.google.com/drive/folders/1wmd_pjmmDn3eSl3TLgProgZgCQZgtZIC?usp=sharing) and put them to `${Project}/weights/smpl_data/smpl`.
2. Download DINOv2 pretrained weights from [their official repository](https://github.com/facebookresearch/dinov2?tab=readme-ov-file#pretrained-models). We use `ViT-B/14 distilled (without registers)`. Please put `dinov2_vitb14_pretrain.pth` to `${Project}/weights/dinov2`. These weights will be used to initialize our encoder. **You can skip this step if you are not going to train SAT-HMR.**
3. Download pretrained weights for inference and evaluation from [Google drive](https://drive.google.com/file/d/12tGbqcrJ8YACcrfi5qslZNEciIHxcScZ/view?usp=sharing) or [🤗HuggingFace](https://huggingface.co/ChiSu001/SAT-HMR/blob/main/weights/sat_hmr/sat_644.pth). Please put them to `${Project}/weights/sat_hmr`.
Now the `weights` directory structure should be like this.
```
${Project}
|-- weights
|-- dinov2
| `-- dinov2_vitb14_pretrain.pth
|-- sat_hmr
| `-- sat_644.pth
`-- smpl_data
`-- smpl
|-- body_verts_smpl.npy
|-- J_regressor_h36m_correct.npy
|-- SMPL_FEMALE.pkl
|-- SMPL_MALE.pkl
|-- smpl_mean_params.npz
`-- SMPL_NEUTRAL.pkl
```
## Inference on Images
<h4> Inference with 1 GPU</h4>
We provide some demo images in `${Project}/demo`. You can run SAT-HMR on all images on a single GPU via:
```bash
python main.py --mode infer --cfg demo
```
Results with overlayed meshes will be saved in `${Project}/demo_results`.
You can specify your own inference configuration by modifying `${Project}/configs/run/demo.yaml` (a sketch of such a config follows the list):
- `input_dir` specifies the input image folder.
- `output_dir` specifies the output folder.
- `conf_thresh` specifies a list of confidence thresholds used for detection. SAT-HMR will run inference using thresholds in the list, respectively.
- `infer_batch_size` specifies the batch size used for inference (on a single GPU).
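For illustration, a hypothetical `demo.yaml` using these fields could look like this (the values are examples only, not the shipped defaults):
```yaml
input_dir: demo            # folder containing input images
output_dir: demo_results   # folder for rendered results
conf_thresh: [0.3, 0.5]    # inference runs once per threshold
infer_batch_size: 4        # per-GPU inference batch size
```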
<h4> Inference with Multiple GPUs</h4>
You can also try distributed inference on multiple GPUs if your input folder contains a large number of images.
Since we use [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) to launch our distributed configuration, you may first need to tell Accelerate how your system is set up for distributed processing. To do so, run the following command and answer the questions prompted to you:
```bash
accelerate config
```
Then run:
```bash
accelerate launch main.py --mode infer --cfg demo
```
<!-- ## Datasets Preparation
Coming soon.
## Training and Evaluation
Coming soon. -->
## Citing
If you find this code useful for your research, please consider citing our paper:
```bibtex
@article{su2024sathmr,
title={SAT-HMR: Real-Time Multi-Person 3D Mesh Estimation via Scale-Adaptive Tokens},
author={Su, Chi and Ma, Xiaoxuan and Su, Jiajun and Wang, Yizhou},
journal={arXiv preprint arXiv:2411.19824},
year={2024}
}
```
## Acknowledgement
This repo is built on the excellent work [DINOv2](https://github.com/facebookresearch/dinov2), [DAB-DETR](https://github.com/IDEA-Research/DAB-DETR), [DINO](https://github.com/IDEA-Research/DINO) and [🤗 Accelerate](https://huggingface.co/docs/accelerate/index). Thanks for these great projects. |
budecosystem/code-millenials-1b | budecosystem | "2025-01-06T06:39:27Z" | 276 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"phi",
"text-generation",
"code",
"custom_code",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-12-13T19:29:13Z" | ---
library_name: transformers
tags:
- code
---
# Bud Code Millenials 1B
Welcome to our Code Model repository! Our model is specifically fine-tuned for code generation tasks. Bud Millenial Code Gen open-source models are currently the state of the art (SOTA) for code generation, beating all existing models of all sizes. We have achieved a HumanEval value of 80.48 @ Pass 1, beating proprietary models like Gemini Ultra, Claude, GPT-3.5 etc. by a large margin, and on par with GPT-4 (HumanEval ~82; ref. WizardCoder). Our proprietary model (Bud Code Jr) beats GPT-4 as well, with a HumanEval value of 88.2 and a context size of 168K; we will be releasing an API for researchers, enterprises, and potential partners by the end of January 2024. If interested, please reach out to [email protected]
### News 🔥🔥🔥
- [2024/01/09] We released **Code Millenials 3B**, which achieves **56.09 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/09] We released **Code Millenials 1B**, which achieves **51.82 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/03] We released **Code Millenials 34B**, which achieves **80.48 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
- [2024/01/02] We released **Code Millenials 13B**, which achieves **76.21 pass@1** on the [HumanEval Benchmarks](https://github.com/openai/human-eval).
### HumanEval
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
<p align="center" width="100%">
<a ><img src="https://raw.githubusercontent.com/BudEcosystem/code-millenials/main/assets/result-3b.png" alt="CodeMillenials" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a>
</p>
For the Millenials models, the eval script in the GitHub repo was used for the above results.
Note: The HumanEval values of other models are taken from the official repos of [WizardCoder](https://github.com/nlpxucan/WizardLM), [DeepseekCoder](https://github.com/deepseek-ai/deepseek-coder), [Gemini](https://deepmind.google/technologies/gemini/#capabilities) etc.
### Models
| Model | Checkpoint | HumanEval (+) | MBPP (+) |
|---------|-------------|---------------|----------|
|Code Millenials 34B | <a href="https://huggingface.co/budecosystem/code-millenials-34b" target="_blank">HF Link</a> | 80.48 (75) | 74.68 (62.9) |
|Code Millenials 13B | <a href="https://huggingface.co/budecosystem/code-millenials-13b" target="_blank">HF Link</a> | 76.21 (69.5) | 70.17 (57.6) |
|Code Millenials 3B | <a href="https://huggingface.co/budecosystem/code-millenials-3b" target="_blank">HF Link</a> | 56.09 (52.43) | 55.13 (47.11) |
|Code Millenials 1B | <a href="https://huggingface.co/budecosystem/code-millenials-1b" target="_blank">HF Link</a> | 51.82 (48.17) | 53.13 (44.61) |
### 🚀 Quick Start
Inference code using the pre-trained model from the Hugging Face model hub
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("budecosystem/code-millenials-1b")
model = AutoModelForCausalLM.from_pretrained("budecosystem/code-millenials-1b")
template = """A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.
### Instruction: {instruction} ### Response:"""
instruction = "<your code instruction here>"  # replace with your coding task
prompt = template.format(instruction=instruction)
inputs = tokenizer(prompt, return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
## Training details
The model was trained on 8 A100 80GB GPUs for approximately 6 hours.
| Hyperparameters | Value |
| :----------------------------| :-----: |
| per_device_train_batch_size | 6 |
| gradient_accumulation_steps | 1 |
| epoch | 3 |
| steps | 11502 |
| learning_rate | 2e-5 |
| lr schedular type | cosine |
| warmup ratio | 0.1 |
| optimizer | adamw |
| fp16 | True |
| GPU | 8 A100 80GB |
### Important Note
- **Bias, Risks, and Limitations:** The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding.
sd-concepts-library/xidiversity | sd-concepts-library | "2022-11-12T03:24:28Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2022-11-12T03:24:18Z" | ---
license: mit
---
### xidiversity on Stable Diffusion
This is the `<JinpingXi>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
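Outside those notebooks, the embedding can also be loaded straight into a diffusers pipeline; a minimal sketch (assuming the repo ships a standard `learned_embeds.bin`, as sd-concepts-library repos usually do):
```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/xidiversity")  # registers <JinpingXi>
image = pipe("a portrait of <JinpingXi>").images[0]
image.save("xidiversity.png")
```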
Here is the new concept you will be able to use as an `object`:




|
s3nh/togethercomputer-LLaMA-2-7B-32K-open-Orca-v1-GGML | s3nh | "2023-07-31T07:15:35Z" | 0 | 9 | transformers | [
"transformers",
"text-generation-inference",
"text-generation",
"en",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-07-31T06:50:33Z" | ---
language:
- en
tags:
- text-generation-inference
pipeline_tag: text-generation
library_name: transformers
---
## Original model card
Buy me a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGML Format model files for [This project](https://huggingface.co/NickyNicky/togethercomputer-LLaMA-2-7B-32K-open-Orca-v1).
### inference
```python
from ctransformers import AutoModelForCausalLM

# output_dir and ggml_file are placeholders: point them at your local
# model directory and GGML file name.
llm = AutoModelForCausalLM.from_pretrained(output_dir, ggml_file,
                                           gpu_layers=32, model_type="llama")

manual_input: str = "Tell me about your last dream, please."
print(llm(manual_input,
          max_new_tokens=256,
          temperature=0.9,
          top_p=0.7))
```
# Original model card |
isabelofespana/northshore | isabelofespana | "2025-03-22T18:41:36Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-02-21T01:01:49Z" | ---
license: apache-2.0
---
|
Zoyd/TIGER-Lab_MAmmoTH2-7B-8_0bpw_exl2 | Zoyd | "2024-05-21T13:25:17Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"arxiv:2405.03548",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"exl2",
"region:us"
] | text-generation | "2024-05-21T13:20:16Z" | ---
license: mit
language:
- en
---
**Exllamav2** quant (**exl2** / **8.0 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-2_2bpw_exl2)**</center> | <center>2200 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-2_5bpw_exl2)**</center> | <center>2429 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-3_0bpw_exl2)**</center> | <center>2843 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-3_5bpw_exl2)**</center> | <center>3260 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-3_75bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-4_0bpw_exl2)**</center> | <center>3677 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-4_25bpw_exl2)**</center> | <center>3885 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-5_0bpw_exl2)**</center> | <center>4504 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-6_0bpw_exl2)**</center> | <center>5366 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-6_5bpw_exl2)**</center> | <center>5778 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/TIGER-Lab_MAmmoTH2-7B-8_0bpw_exl2)**</center> | <center>6690 MB</center> | <center>8</center> |
# 🦣 MAmmoTH2: Scaling Instructions from the Web
Project Page: [https://tiger-ai-lab.github.io/MAmmoTH2/](https://tiger-ai-lab.github.io/MAmmoTH2/)
Paper: [https://arxiv.org/pdf/2405.03548](https://arxiv.org/pdf/2405.03548)
Code: [https://github.com/TIGER-AI-Lab/MAmmoTH2](https://github.com/TIGER-AI-Lab/MAmmoTH2)
## Introduction
Introducing 🦣 MAmmoTH2, a game-changer in improving the reasoning abilities of large language models (LLMs) through innovative instruction tuning. By efficiently harvesting 10 million instruction-response pairs from the pre-training web corpus, we've developed MAmmoTH2 models that significantly boost performance on reasoning benchmarks. For instance, MAmmoTH2-7B (Mistral) sees its performance soar from 11% to 34% on MATH and from 36% to 67% on GSM8K, all without training on any domain-specific data. Further training on public instruction tuning datasets yields MAmmoTH2-Plus, setting new standards in reasoning and chatbot benchmarks. Our work presents a cost-effective approach to acquiring large-scale, high-quality instruction data, offering a fresh perspective on enhancing LLM reasoning abilities.
| | **Base Model** | **MAmmoTH2** | **MAmmoTH2-Plus** |
|:-----|:---------------------|:-------------------------------------------------------------------|:------------------------------------------------------------------|
| 7B | Mistral | 🦣 [MAmmoTH2-7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B) | 🦣 [MAmmoTH2-7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-7B-Plus) |
| 8B | Llama-3 | 🦣 [MAmmoTH2-8B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B) | 🦣 [MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus) |
| 8x7B | Mixtral | 🦣 [MAmmoTH2-8x7B](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B) | 🦣 [MAmmoTH2-8x7B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8x7B-Plus) |
## Training Data
Please refer to https://huggingface.co/datasets/TIGER-Lab/WebInstructSub for more details.

## Training Procedure
The models are fine-tuned with the WEBINSTRUCT dataset using the original Llama-3, Mistral, and Mixtral models as base models. The training procedure varies for different models based on their sizes. Check out our paper for more details.
## Evaluation
The models are evaluated using open-ended and multiple-choice math problems from several datasets. Here are the results:
| **Model** | **TheoremQA** | **MATH** | **GSM8K** | **GPQA** | **MMLU-ST** | **BBH** | **ARC-C** | **Avg** |
|:---------------------------------------|:--------------|:---------|:----------|:---------|:------------|:--------|:----------|:--------|
| **MAmmoTH2-7B** (Updated) | 29.0 | 36.7 | 68.4 | 32.4 | 62.4 | 58.6 | 81.7 | 52.7 |
| **MAmmoTH2-8B** (Updated) | 30.3 | 35.8 | 70.4 | 35.2 | 64.2 | 62.1 | 82.2 | 54.3 |
| **MAmmoTH2-8x7B** | 32.2 | 39.0 | 75.4 | 36.8 | 67.4 | 71.1 | 87.5 | 58.9 |
| **MAmmoTH2-7B-Plus** (Updated) | 31.2 | 46.0 | 84.6 | 33.8 | 63.8 | 63.3 | 84.4 | 58.1 |
| **MAmmoTH2-8B-Plus** (Updated) | 31.5 | 43.0 | 85.2 | 35.8 | 66.7 | 69.7 | 84.3 | 59.4 |
| **MAmmoTH2-8x7B-Plus** | 34.1 | 47.0 | 86.4 | 37.8 | 72.4 | 74.1 | 88.4 | 62.9 |
To reproduce our results, please refer to https://github.com/TIGER-AI-Lab/MAmmoTH2/tree/main/math_eval.
## Usage
You can use the models through Hugging Face's Transformers library. Use the `pipeline` function to create a text-generation pipeline with the model of your choice, then feed in a math problem to get the solution.
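For instance, a minimal sketch along those lines (illustrative; see the repo below for the exact prompt/chat template):
```python
from transformers import pipeline

# Load any of the MAmmoTH2 checkpoints into a text-generation pipeline
pipe = pipeline("text-generation", model="TIGER-Lab/MAmmoTH2-7B", device_map="auto")

problem = "What is the sum of the first 100 positive integers?"
print(pipe(problem, max_new_tokens=256)[0]["generated_text"])
```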
Check our Github repo for more advanced use: https://github.com/TIGER-AI-Lab/MAmmoTH2
## Limitations
We've tried our best to build math generalist models. However, we acknowledge that the models' performance may vary based on the complexity and specifics of the math problem. Still, not all mathematical fields can be covered comprehensively.
## Citation
If you use the models, data, or code from this project, please cite the original paper:
```
@article{yue2024mammoth2,
title={MAmmoTH2: Scaling Instructions from the Web},
author={Yue, Xiang and Zheng, Tuney and Zhang, Ge and Chen, Wenhu},
journal={arXiv preprint arXiv:2405.03548},
year={2024}
}
``` |
msproper/PR6 | msproper | "2023-06-21T04:36:55Z" | 6 | 0 | tf-keras | [
"tf-keras",
"region:us"
] | null | "2023-06-20T18:07:32Z" | Given the fashion_mnist dataset and a trained neural network.
They were used to generate an image resembling an item from the fashion_mnist set.
Per the assignment, the weights of the provided neural network must not be changed during fine-tuning.
The optimizer used was Adam; the loss was mean squared error.
Total params: 54,699

 |
Vamsi/T5_Paraphrase_Paws | Vamsi | "2023-06-12T06:31:04Z" | 7,651 | 37 | transformers | [
"transformers",
"pytorch",
"tf",
"jax",
"safetensors",
"t5",
"text2text-generation",
"paraphrase-generation",
"text-generation",
"Conditional Generation",
"en",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: "en"
tags:
- paraphrase-generation
- text-generation
- Conditional Generation
inference: false
---
# Paraphrase-Generation
## Model description
T5 model for generating paraphrases of English sentences. Trained on the [Google PAWS](https://github.com/google-research-datasets/paws) dataset.
## How to use
Requires `sentencepiece`: `pip install sentencepiece`
PyTorch and TF models available
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Vamsi/T5_Paraphrase_Paws")
model = AutoModelForSeq2SeqLM.from_pretrained("Vamsi/T5_Paraphrase_Paws").to('cuda')
sentence = "This is something which i cannot understand at all"
text = "paraphrase: " + sentence + " </s>"
encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt")
input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda")
outputs = model.generate(
input_ids=input_ids, attention_mask=attention_masks,
max_length=256,
do_sample=True,
top_k=120,
top_p=0.95,
early_stopping=True,
num_return_sequences=5
)
for output in outputs:
line = tokenizer.decode(output, skip_special_tokens=True,clean_up_tokenization_spaces=True)
print(line)
```
For more details on training your own T5 model or using this model, check out [Paraphrase Generation](https://github.com/Vamsi995/Paraphrase-Generator).
|
craa/100M_low_100_495 | craa | "2024-12-17T11:59:08Z" | 22 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T21:43:25Z" | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: 100M_low_100_495
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 100M_low_100_495
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3016
- Accuracy: 0.3945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0006
- train_batch_size: 32
- eval_batch_size: 16
- seed: 495
- optimizer: AdamW (torch) with betas=(0.9, 0.98) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 5.1008 | 0.1078 | 1000 | 5.0280 | 0.2264 |
| 4.5869 | 0.2156 | 2000 | 4.5307 | 0.2683 |
| 4.3409 | 0.3235 | 3000 | 4.2451 | 0.2975 |
| 4.1684 | 0.4313 | 4000 | 4.1010 | 0.3118 |
| 4.0566 | 0.5391 | 5000 | 4.0031 | 0.3204 |
| 4.0017 | 0.6469 | 6000 | 3.9256 | 0.3268 |
| 3.9334 | 0.7547 | 7000 | 3.8711 | 0.3327 |
| 3.8792 | 0.8625 | 8000 | 3.8260 | 0.3367 |
| 3.8496 | 0.9704 | 9000 | 3.7877 | 0.3406 |
| 3.7681 | 1.0782 | 10000 | 3.7565 | 0.3435 |
| 3.779 | 1.1860 | 11000 | 3.7306 | 0.3456 |
| 3.7435 | 1.2938 | 12000 | 3.7048 | 0.3484 |
| 3.7147 | 1.4016 | 13000 | 3.6820 | 0.3511 |
| 3.7031 | 1.5094 | 14000 | 3.6629 | 0.3528 |
| 3.6908 | 1.6173 | 15000 | 3.6419 | 0.3550 |
| 3.6659 | 1.7251 | 16000 | 3.6263 | 0.3567 |
| 3.6656 | 1.8329 | 17000 | 3.6137 | 0.3580 |
| 3.6417 | 1.9407 | 18000 | 3.5963 | 0.3592 |
| 3.5861 | 2.0485 | 19000 | 3.5879 | 0.3609 |
| 3.564 | 2.1563 | 20000 | 3.5797 | 0.3618 |
| 3.5519 | 2.2642 | 21000 | 3.5672 | 0.3630 |
| 3.5538 | 2.3720 | 22000 | 3.5582 | 0.3641 |
| 3.5472 | 2.4798 | 23000 | 3.5451 | 0.3652 |
| 3.5427 | 2.5876 | 24000 | 3.5375 | 0.3663 |
| 3.5516 | 2.6954 | 25000 | 3.5239 | 0.3672 |
| 3.5549 | 2.8032 | 26000 | 3.5184 | 0.3681 |
| 3.5393 | 2.9111 | 27000 | 3.5092 | 0.3692 |
| 3.4377 | 3.0189 | 28000 | 3.5062 | 0.3695 |
| 3.4569 | 3.1267 | 29000 | 3.5000 | 0.3705 |
| 3.4701 | 3.2345 | 30000 | 3.4936 | 0.3712 |
| 3.4586 | 3.3423 | 31000 | 3.4883 | 0.3722 |
| 3.4684 | 3.4501 | 32000 | 3.4816 | 0.3724 |
| 3.455 | 3.5580 | 33000 | 3.4763 | 0.3730 |
| 3.4791 | 3.6658 | 34000 | 3.4702 | 0.3737 |
| 3.4562 | 3.7736 | 35000 | 3.4633 | 0.3743 |
| 3.4399 | 3.8814 | 36000 | 3.4548 | 0.3751 |
| 3.4627 | 3.9892 | 37000 | 3.4513 | 0.3758 |
| 3.3778 | 4.0970 | 38000 | 3.4527 | 0.3764 |
| 3.3863 | 4.2049 | 39000 | 3.4468 | 0.3765 |
| 3.3899 | 4.3127 | 40000 | 3.4451 | 0.3769 |
| 3.4059 | 4.4205 | 41000 | 3.4387 | 0.3777 |
| 3.404 | 4.5283 | 42000 | 3.4332 | 0.3783 |
| 3.3956 | 4.6361 | 43000 | 3.4281 | 0.3785 |
| 3.3982 | 4.7439 | 44000 | 3.4217 | 0.3795 |
| 3.4048 | 4.8518 | 45000 | 3.4172 | 0.3797 |
| 3.3856 | 4.9596 | 46000 | 3.4142 | 0.3803 |
| 3.3102 | 5.0674 | 47000 | 3.4185 | 0.3801 |
| 3.3155 | 5.1752 | 48000 | 3.4126 | 0.3808 |
| 3.3386 | 5.2830 | 49000 | 3.4095 | 0.3811 |
| 3.3425 | 5.3908 | 50000 | 3.4081 | 0.3813 |
| 3.3433 | 5.4987 | 51000 | 3.4022 | 0.3820 |
| 3.3509 | 5.6065 | 52000 | 3.3964 | 0.3825 |
| 3.3434 | 5.7143 | 53000 | 3.3918 | 0.3827 |
| 3.3313 | 5.8221 | 54000 | 3.3866 | 0.3832 |
| 3.3542 | 5.9299 | 55000 | 3.3834 | 0.3838 |
| 3.2513 | 6.0377 | 56000 | 3.3855 | 0.3840 |
| 3.2695 | 6.1456 | 57000 | 3.3860 | 0.3838 |
| 3.2863 | 6.2534 | 58000 | 3.3822 | 0.3846 |
| 3.2716 | 6.3612 | 59000 | 3.3780 | 0.3849 |
| 3.2879 | 6.4690 | 60000 | 3.3752 | 0.3850 |
| 3.2794 | 6.5768 | 61000 | 3.3710 | 0.3856 |
| 3.3006 | 6.6846 | 62000 | 3.3657 | 0.3862 |
| 3.2749 | 6.7925 | 63000 | 3.3632 | 0.3862 |
| 3.2785 | 6.9003 | 64000 | 3.3577 | 0.3867 |
| 3.2066 | 7.0081 | 65000 | 3.3606 | 0.3872 |
| 3.2036 | 7.1159 | 66000 | 3.3619 | 0.3870 |
| 3.2288 | 7.2237 | 67000 | 3.3607 | 0.3875 |
| 3.2297 | 7.3315 | 68000 | 3.3549 | 0.3879 |
| 3.2247 | 7.4394 | 69000 | 3.3515 | 0.3877 |
| 3.2344 | 7.5472 | 70000 | 3.3469 | 0.3885 |
| 3.231 | 7.6550 | 71000 | 3.3447 | 0.3886 |
| 3.2348 | 7.7628 | 72000 | 3.3410 | 0.3894 |
| 3.2422 | 7.8706 | 73000 | 3.3376 | 0.3895 |
| 3.2537 | 7.9784 | 74000 | 3.3335 | 0.3902 |
| 3.1656 | 8.0863 | 75000 | 3.3394 | 0.3899 |
| 3.1882 | 8.1941 | 76000 | 3.3373 | 0.3901 |
| 3.1853 | 8.3019 | 77000 | 3.3346 | 0.3905 |
| 3.1767 | 8.4097 | 78000 | 3.3326 | 0.3906 |
| 3.1757 | 8.5175 | 79000 | 3.3295 | 0.3912 |
| 3.2105 | 8.6253 | 80000 | 3.3234 | 0.3916 |
| 3.165 | 8.7332 | 81000 | 3.3220 | 0.3917 |
| 3.1815 | 8.8410 | 82000 | 3.3194 | 0.3920 |
| 3.1949 | 8.9488 | 83000 | 3.3150 | 0.3925 |
| 3.1334 | 9.0566 | 84000 | 3.3191 | 0.3924 |
| 3.1266 | 9.1644 | 85000 | 3.3170 | 0.3927 |
| 3.127 | 9.2722 | 86000 | 3.3142 | 0.3930 |
| 3.1298 | 9.3801 | 87000 | 3.3129 | 0.3933 |
| 3.1258 | 9.4879 | 88000 | 3.3108 | 0.3934 |
| 3.1297 | 9.5957 | 89000 | 3.3069 | 0.3939 |
| 3.137 | 9.7035 | 90000 | 3.3053 | 0.3941 |
| 3.1448 | 9.8113 | 91000 | 3.3028 | 0.3943 |
| 3.146 | 9.9191 | 92000 | 3.3016 | 0.3945 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.0+cu124
- Datasets 3.0.2
- Tokenizers 0.20.1
|
ICELORD937/anya_5 | ICELORD937 | "2025-01-13T00:22:31Z" | 18 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2025-01-13T00:12:38Z" | ---
license: creativeml-openrail-m
---
|
QuantFactory/Llama-3-ELYZA-JP-8B-GGUF | QuantFactory | "2024-06-28T11:44:39Z" | 611 | 3 | transformers | [
"transformers",
"gguf",
"text-generation",
"ja",
"en",
"base_model:elyza/Llama-3-ELYZA-JP-8B",
"base_model:quantized:elyza/Llama-3-ELYZA-JP-8B",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-06-28T09:17:59Z" | ---
library_name: transformers
license: llama3
language:
- ja
- en
base_model: elyza/Llama-3-ELYZA-JP-8B
pipeline_tag: text-generation
---
## Llama-3-ELYZA-JP-8B - GGUF
This is a quantized version of [elyza/Llama-3-ELYZA-JP-8B](https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B) created using llama.cpp.
### Model Description

**Llama-3-ELYZA-JP-8B** is a large language model trained by [ELYZA, Inc](https://elyza.ai/).
Based on [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct), it has been enhanced for Japanese usage through additional pre-training and instruction tuning. (Built with Meta Llama3)
For more details, please refer to [our blog post](https://note.com/elyza/n/n360b6084fdbd).
### Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# System prompt (Japanese): "You are a sincere and excellent Japanese assistant.
# Unless instructed otherwise, always answer in Japanese."
DEFAULT_SYSTEM_PROMPT = "あなたは誠実で優秀な日本人のアシスタントです。特に指示が無い場合は、常に日本語で回答してください。"
# User prompt (Japanese): "Please list five ideas for regaining enthusiasm for work."
text = "仕事の熱意を取り戻すためのアイデアを5つ挙げてください。"
model_name = "elyza/Llama-3-ELYZA-JP-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto",
)
model.eval()
messages = [
{"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
{"role": "user", "content": text},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
token_ids = tokenizer.encode(
prompt, add_special_tokens=False, return_tensors="pt"
)
with torch.no_grad():
output_ids = model.generate(
token_ids.to(model.device),
max_new_tokens=1200,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
output = tokenizer.decode(
output_ids.tolist()[0][token_ids.size(1):], skip_special_tokens=True
)
print(output)
```
### Developers
Listed in alphabetical order.
- [Masato Hirakawa](https://huggingface.co/m-hirakawa)
- [Shintaro Horie](https://huggingface.co/e-mon)
- [Tomoaki Nakamura](https://huggingface.co/tyoyo)
- [Daisuke Oba](https://huggingface.co/daisuk30ba)
- [Sam Passaglia](https://huggingface.co/passaglia)
- [Akira Sasaki](https://huggingface.co/akirasasaki)
### License
[Meta Llama 3 Community License](https://llama.meta.com/llama3/license/)
### How to Cite Original Model
```tex
@misc{elyzallama2024,
title={elyza/Llama-3-ELYZA-JP-8B},
url={https://huggingface.co/elyza/Llama-3-ELYZA-JP-8B},
author={Masato Hirakawa and Shintaro Horie and Tomoaki Nakamura and Daisuke Oba and Sam Passaglia and Akira Sasaki},
year={2024},
}
```
### Model Citations
```tex
@article{llama3modelcard,
title={Llama 3 Model Card},
author={AI@Meta},
year={2024},
url = {https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
}
``` |
spacy/uk_core_news_lg | spacy | "2023-10-10T06:35:35Z" | 8 | 0 | spacy | [
"spacy",
"token-classification",
"uk",
"license:mit",
"model-index",
"region:us"
] | token-classification | "2023-01-23T13:47:04Z" | ---
tags:
- spacy
- token-classification
language:
- uk
license: mit
model-index:
- name: uk_core_news_lg
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.8768624014
- name: NER Recall
type: recall
value: 0.8813036776
- name: NER F Score
type: f_score
value: 0.87907743
- task:
name: TAG
type: token-classification
metrics:
- name: TAG (XPOS) Accuracy
type: accuracy
value: 0.9817440564
- task:
name: POS
type: token-classification
metrics:
- name: POS (UPOS) Accuracy
type: accuracy
value: 0.9817440564
- task:
name: MORPH
type: token-classification
metrics:
- name: Morph (UFeats) Accuracy
type: accuracy
value: 0.9520345072
- task:
name: LEMMA
type: token-classification
metrics:
- name: Lemma Accuracy
type: accuracy
value: 0.0
- task:
name: UNLABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Unlabeled Attachment Score (UAS)
type: f_score
value: 0.9379837528
- task:
name: LABELED_DEPENDENCIES
type: token-classification
metrics:
- name: Labeled Attachment Score (LAS)
type: f_score
value: 0.9169280929
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9264398403
---
### Details: https://spacy.io/models/uk#uk_core_news_lg
Ukrainian pipeline optimized for CPU. Components: tok2vec, morphologizer, parser, senter, ner, attribute_ruler, lemmatizer.
| Feature | Description |
| --- | --- |
| **Name** | `uk_core_news_lg` |
| **Version** | `3.7.0` |
| **spaCy** | `>=3.7.0,<3.8.0` |
| **Default Pipeline** | `tok2vec`, `morphologizer`, `parser`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Components** | `tok2vec`, `morphologizer`, `parser`, `senter`, `attribute_ruler`, `lemmatizer`, `ner` |
| **Vectors** | floret (200000, 300) |
| **Sources** | [Ukr-Synth (e5d9eaf3)](https://huggingface.co/datasets/ukr-models/Ukr-Synth) (Volodymyr Kurnosov)<br />[Explosion Vectors (OSCAR 2109 + Wikipedia + OpenSubtitles + WMT News Crawl)](https://github.com/explosion/spacy-vectors-builder) (Explosion) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
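A minimal usage sketch (assuming the pipeline package is installed, e.g. via `python -m spacy download uk_core_news_lg`):
```python
import spacy

# Load the Ukrainian pipeline (tok2vec, morphologizer, parser, ner, ...).
nlp = spacy.load("uk_core_news_lg")
doc = nlp("Україна межує з Польщею.")

for ent in doc.ents:
    print(ent.text, ent.label_)        # named entities: LOC, ORG, PER
for token in doc:
    print(token.text, token.pos_, token.lemma_)
```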
### Label Scheme
<details>
<summary>View label scheme (1211 labels for 3 components)</summary>
| Component | Labels |
| --- | --- |
| **`morphologizer`** | `POS=CCONJ`, `Degree=Cmp\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=ADV\|PronType=Rel`, `POS=PART`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Aspect=Imp\|POS=VERB\|VerbForm=Inf`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `POS=ADV`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|POS=ADP`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Loc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|NumType=Card\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Number=Plur\|POS=ADJ`, `POS=SCONJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Aspect=Perf\|POS=VERB\|VerbForm=Inf`, `Degree=Pos\|POS=ADV`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|POS=VERB\|Person=0\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Tot`, `POS=PART\|Polarity=Neg`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PUNCT\|PunctType=Quot`, `POS=PUNCT\|PunctType=Dash`, `Aspect=Perf\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `POS=ADV\|PronType=Dem`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, 
`Case=Acc\|POS=ADP`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|POS=ADP`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Number=Ptan\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Aspect=Perf\|Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|POS=PRON\|PronType=Neg`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=SPACE`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Aspect=Perf\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|POS=VERB\|Tense=Past\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Aspect=Imp\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|NumType=Card\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Gen\|Number=Ptan\|POS=NOUN`, `Animacy=Inan\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Gen\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Perf\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN`, `Abbr=Yes\|Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, 
`Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Int`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Aspect=Perf\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Aspect=Perf\|Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Anim\|Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Loc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Number=Plur\|POS=ADJ`, `Case=Gen\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Loc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, 
`Case=Acc\|NumType=Card\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Abbr=Yes\|Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Degree=Abs\|POS=ADV`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot\|Variant=Short`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Hyph=Yes\|POS=ADJ\|Variant=Short`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Sup\|POS=ADV`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Rel`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=PROPN`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Aspect=Imp\|POS=AUX\|VerbForm=Inf`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Nom\|Number=Plur\|POS=PROPN\|Uninflect=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=INTJ`, `Case=Acc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=ADJ`, 
`Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Foreign=Yes\|Gender=Masc\|Number=Sing\|POS=X\|Uninflect=Yes`, `Aspect=Imp\|Gender=Neut\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Loc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Aspect=Perf\|Case=Loc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Anim\|Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=NOUN`, `Case=Gen\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Case=Nom\|NumType=Card\|POS=NUM`, `POS=SYM`, `Case=Loc\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Acc\|NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Aspect=Perf\|Case=Ins\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Conv`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Tot`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Gen\|NumType=Card\|POS=NUM`, `Case=Ins\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Case=Ins\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Perf\|Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Animacy=Inan\|Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Rel`, `Case=Loc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Hyph=Yes\|POS=ADJ`, `POS=ADV\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Voc\|Gender=Fem\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NOUN`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=ADV\|PronType=Neg`, 
`Case=Nom\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Rel`, `Animacy=Anim\|Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Variant=Short`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Animacy=Anim\|Case=Acc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Animacy=Anim\|Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=PART\|PartType=Conseq`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Anim\|Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|NumType=Card\|POS=DET\|PronType=Ind`, `Mood=Cnd\|POS=AUX`, `Abbr=Yes\|Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Case=Gen\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Aspect=Perf\|Case=Loc\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Inan\|Case=Nom\|Number=Ptan\|POS=NOUN`, `Aspect=Perf\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Animacy=Inan\|Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|POS=ADP`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|NumType=Card\|POS=NUM`, `Animacy=Anim\|Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ\|Uninflect=Yes`, `Case=Loc\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NOUN`, `Animacy=Anim\|Case=Nom\|POS=PRON\|PronType=Ind`, `Abbr=Yes\|Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg\|Variant=Short`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Imp\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=X`, `Case=Nom\|Gender=Masc\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Aspect=Imp\|Gender=Fem\|Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Animacy=Inan\|Case=Ins\|Number=Ptan\|POS=NOUN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=NOUN\|Uninflect=Yes`, `POS=ADV\|PronType=Int`, `Aspect=Imp\|POS=VERB\|Polarity=Neg\|Tense=Pres\|VerbForm=Conv`, `Aspect=Imp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|NumType=Card\|Number=Plur\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Animacy=Inan\|Case=Nom\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Loc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Case=Gen\|Gender=Masc\|NumType=Card\|POS=NUM`, `Animacy=Inan\|Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Animacy=Inan\|Aspect=Perf\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Nom\|POS=PRON\|PronType=Neg`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Ins\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=PROPN\|Uninflect=Yes`, 
`Aspect=Imp\|Case=Ins\|Number=Plur\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Animacy=Anim\|Case=Acc\|Number=Ptan\|POS=NOUN`, `Animacy=Anim\|Case=Gen\|Gender=Fem\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|NumType=Card\|POS=NUM`, `Case=Ins\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Acc\|Gender=Masc\|NumType=Card\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Abbr=Yes\|Animacy=Inan\|Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Animacy=Anim\|Animacy[gram]=Inan\|Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Ins\|Degree=Pos\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Gen\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Gen\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Rel`, `Animacy=Inan\|Case=Loc\|Number=Ptan\|POS=PROPN`, `Aspect=Imp\|Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Rel`, `Aspect=Imp\|Case=Nom\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Nom\|Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN\|Uninflect=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Abbr=Yes\|Animacy=Inan\|Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN\|Uninflect=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Aspect=Perf\|Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Gen\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Acc\|Number=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|NumType=Card\|POS=NUM\|Uninflect=Yes`, `Animacy=Inan\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Aspect=Imp\|Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Tense=Pres\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Animacy=Anim\|Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Uninflect=Yes`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Uninflect=Yes`, `Case=Gen\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NameType=Giv\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Dat\|Gender=Masc\|NameType=Sur\|Number=Sing\|POS=PROPN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, _(truncated: full list in pipeline meta)_ |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:sp`, `advcl:svc`, `advmod`, `advmod:det`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:numgov`, `discourse`, `expl`, `fixed`, `flat:abs`, `flat:foreign`, `flat:name`, `flat:range`, `flat:repeat`, `flat:sibl`, `flat:title`, `iobj`, `mark`, `nmod`, `nsubj`, `nummod`, `nummod:gov`, `obj`, `obl`, `orphan`, `parataxis`, `parataxis:discourse`, `punct`, `vocative`, `xcomp`, `xcomp:sp` |
| **`ner`** | `LOC`, `ORG`, `PER` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_ACC` | 99.99 |
| `TOKEN_P` | 99.99 |
| `TOKEN_R` | 99.97 |
| `TOKEN_F` | 99.98 |
| `POS_ACC` | 98.17 |
| `MORPH_ACC` | 95.20 |
| `MORPH_MICRO_P` | 97.88 |
| `MORPH_MICRO_R` | 97.16 |
| `MORPH_MICRO_F` | 97.52 |
| `SENTS_P` | 94.48 |
| `SENTS_R` | 90.88 |
| `SENTS_F` | 92.64 |
| `DEP_UAS` | 93.80 |
| `DEP_LAS` | 91.69 |
| `TAG_ACC` | 98.17 |
| `LEMMA_ACC` | 0.00 |
| `ENTS_P` | 87.69 |
| `ENTS_R` | 88.13 |
| `ENTS_F` | 87.91 | |
tomoohive/PyramidTraining | tomoohive | "2023-08-03T03:54:13Z" | 5 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-08-03T03:52:37Z" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: tomoohive/PyramidTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
lesso17/60eca64e-52b4-4974-84ff-099c272d1b07 | lesso17 | "2025-01-31T01:29:09Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:DeepMount00/Llama-3-8b-Ita",
"base_model:adapter:DeepMount00/Llama-3-8b-Ita",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-31T00:57:40Z" | ---
library_name: peft
license: llama3
base_model: DeepMount00/Llama-3-8b-Ita
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 60eca64e-52b4-4974-84ff-099c272d1b07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: DeepMount00/Llama-3-8b-Ita
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 6106277caaffaa53_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6106277caaffaa53_train_data.json
type:
field_instruction: citing_prompt
field_output: holding_0
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso17/60eca64e-52b4-4974-84ff-099c272d1b07
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/6106277caaffaa53_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 6926cf8f-0e36-4597-957b-85f8ad09dc8a
wandb_project: new-01-29
wandb_run: your_name
wandb_runid: 6926cf8f-0e36-4597-957b-85f8ad09dc8a
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 60eca64e-52b4-4974-84ff-099c272d1b07
This model is a fine-tuned version of [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2507
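As this repository contains a LoRA adapter rather than full model weights, a minimal loading sketch with PEFT might look like the following (loading options such as dtype and device placement are illustrative):
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "DeepMount00/Llama-3-8b-Ita", torch_dtype="auto", device_map="auto"
)
# Attach the fine-tuned LoRA adapter from this repository.
model = PeftModel.from_pretrained(base, "lesso17/60eca64e-52b4-4974-84ff-099c272d1b07")
tokenizer = AutoTokenizer.from_pretrained("DeepMount00/Llama-3-8b-Ita")
```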
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.6501 | 0.0318 | 200 | 2.2507 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
jeiku/Aura_Qwen_7B | jeiku | "2024-06-13T06:05:51Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:ResplendentAI/Qwen_jeiku_LoRA_128",
"base_model:merge:ResplendentAI/Qwen_jeiku_LoRA_128",
"base_model:jeiku/dontusethis",
"base_model:merge:jeiku/dontusethis",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-13T06:03:05Z" | ---
base_model:
- jeiku/dontusethis
- ResplendentAI/Qwen_jeiku_LoRA_128
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [jeiku/dontusethis](https://huggingface.co/jeiku/dontusethis) + [ResplendentAI/Qwen_jeiku_LoRA_128](https://huggingface.co/ResplendentAI/Qwen_jeiku_LoRA_128)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: jeiku/dontusethis+ResplendentAI/Qwen_jeiku_LoRA_128
merge_method: passthrough
dtype: float16
```
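A merge from a config like this can typically be reproduced with mergekit's CLI; a minimal sketch (the output directory name is illustrative):
```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```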
|
LoneStriker/HuginnV5.5-12.6B-GGUF | LoneStriker | "2024-01-28T14:13:39Z" | 0 | 1 | null | [
"gguf",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-01-28T13:41:12Z" | ---
license: cc-by-4.0
---

### Huginn V5.5
Experimental frankenmerge of multiple 7B models using the DARE-TIES method.
Including:
### Part 1:
* https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
* https://huggingface.co/maywell/Synatra-7B-v0.3-RP
### Part 2:
* https://huggingface.co/mlabonne/NeuralBeagle14-7B
* https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2
### Part 3:
Merged part 1 and part 2 together.
### Part 4:
Then took the first 26 layers of https://huggingface.co/FelixChao/WestSeverus-7B-DPO-v2 and added them before the 32 layers of part 3 to make the final model.
### Prompting and scope:
Seems to work well with the Alpaca format for instructions, and the ChatML format for normal conversation; the standard Alpaca template is shown below for reference.
Scores just under 73 points on the leaderboard, roughly 10 points higher than any previous Huginn model.
Huginn primarily excels at conversational and creative tasks, being capable at story writing, roleplaying, and even helping writers with creative work
(Huginn is better at coming up with creative ideas than most other models).
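For reference, the standard Alpaca instruction template (a widely used community convention, not something specific to this model) looks like:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```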
|
dcduplooy/ppo-LunarLander-v2 | dcduplooy | "2023-03-15T19:06:08Z" | 3 | 0 | transformers | [
"transformers",
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"endpoints_compatible",
"region:us"
] | reinforcement-learning | "2023-02-21T21:40:47Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 62.98 +/- 95.87
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 500000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'dcduplooy/ppo-LunarLander-v2',
 'batch_size': 512,
 'minibatch_size': 128}
```
|
mradermacher/magistrate-3.2-3b-base-GGUF | mradermacher | "2025-03-14T17:28:44Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"llama-3",
"spectrum",
"axolotl",
"en",
"dataset:macadeliccc/US-SupremeCourtVerdicts",
"dataset:macadeliccc/US-FederalLaws",
"base_model:macadeliccc/magistrate-3.2-3b-base",
"base_model:quantized:macadeliccc/magistrate-3.2-3b-base",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | "2025-03-14T16:54:09Z" | ---
base_model: macadeliccc/magistrate-3.2-3b-base
datasets:
- macadeliccc/US-SupremeCourtVerdicts
- macadeliccc/US-FederalLaws
language:
- en
library_name: transformers
license: llama3.2
license_link: https://huggingface.co/meta-llama/Llama-3.2-3B/blob/main/LICENSE.txt
quantized_by: mradermacher
tags:
- generated_from_trainer
- llama-3
- spectrum
- axolotl
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/macadeliccc/magistrate-3.2-3b-base
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
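As a convenience, a single quant can also be fetched programmatically with `huggingface_hub` (the filename is taken from the table below; pick whichever quant suits your hardware):
```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/magistrate-3.2-3b-base-GGUF",
    filename="magistrate-3.2-3b-base.Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```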
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q2_K.gguf) | Q2_K | 1.5 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q3_K_S.gguf) | Q3_K_S | 1.6 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q3_K_M.gguf) | Q3_K_M | 1.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q3_K_L.gguf) | Q3_K_L | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.IQ4_XS.gguf) | IQ4_XS | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q4_K_S.gguf) | Q4_K_S | 2.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q4_K_M.gguf) | Q4_K_M | 2.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q5_K_S.gguf) | Q5_K_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q5_K_M.gguf) | Q5_K_M | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q6_K.gguf) | Q6_K | 2.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.Q8_0.gguf) | Q8_0 | 3.5 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/magistrate-3.2-3b-base-GGUF/resolve/main/magistrate-3.2-3b-base.f16.gguf) | f16 | 6.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nouraa5/whisper-small-arabic | nouraa5 | "2025-04-15T02:32:16Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"ar",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2025-04-14T19:48:04Z" |
|
research-backup/roberta-large-conceptnet-average-no-mask-prompt-b-nce | research-backup | "2022-09-21T00:18:30Z" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"dataset:relbert/conceptnet_high_confidence",
"model-index",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2022-08-08T20:45:41Z" | ---
datasets:
- relbert/conceptnet_high_confidence
model-index:
- name: relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce
results:
- task:
name: Relation Mapping
type: sorting-task
dataset:
name: Relation Mapping
args: relbert/relation_mapping
type: relation-mapping
metrics:
- name: Accuracy
type: accuracy
value: 0.8198809523809524
- task:
name: Analogy Questions (SAT full)
type: multiple-choice-qa
dataset:
name: SAT full
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5294117647058824
- task:
name: Analogy Questions (SAT)
type: multiple-choice-qa
dataset:
name: SAT
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5252225519287834
- task:
name: Analogy Questions (BATS)
type: multiple-choice-qa
dataset:
name: BATS
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.7821011673151751
- task:
name: Analogy Questions (Google)
type: multiple-choice-qa
dataset:
name: Google
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.894
- task:
name: Analogy Questions (U2)
type: multiple-choice-qa
dataset:
name: U2
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5263157894736842
- task:
name: Analogy Questions (U4)
type: multiple-choice-qa
dataset:
name: U4
args: relbert/analogy_questions
type: analogy-questions
metrics:
- name: Accuracy
type: accuracy
value: 0.5717592592592593
- task:
name: Lexical Relation Classification (BLESS)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9020641856260359
- name: F1 (macro)
type: f1_macro
value: 0.8948753350691158
- task:
name: Lexical Relation Classification (CogALexV)
type: classification
dataset:
name: CogALexV
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.846244131455399
- name: F1 (macro)
type: f1_macro
value: 0.6730554272487049
- task:
name: Lexical Relation Classification (EVALution)
type: classification
dataset:
name: BLESS
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.6625135427952329
- name: F1 (macro)
type: f1_macro
value: 0.6558813092612158
- task:
name: Lexical Relation Classification (K&H+N)
type: classification
dataset:
name: K&H+N
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.9580580093204424
- name: F1 (macro)
type: f1_macro
value: 0.8732893037249027
- task:
name: Lexical Relation Classification (ROOT09)
type: classification
dataset:
name: ROOT09
args: relbert/lexical_relation_classification
type: relation-classification
metrics:
- name: F1
type: f1
value: 0.8884362268881228
- name: F1 (macro)
type: f1_macro
value: 0.8878260786406326
---
# relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce
RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
[relbert/conceptnet_high_confidence](https://huggingface.co/datasets/relbert/conceptnet_high_confidence).
Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail).
It achieves the following results on the relation understanding tasks:
- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce/raw/main/analogy.json)):
- Accuracy on SAT (full): 0.5294117647058824
- Accuracy on SAT: 0.5252225519287834
- Accuracy on BATS: 0.7821011673151751
- Accuracy on U2: 0.5263157894736842
- Accuracy on U4: 0.5717592592592593
- Accuracy on Google: 0.894
- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce/raw/main/classification.json)):
- Micro F1 score on BLESS: 0.9020641856260359
- Micro F1 score on CogALexV: 0.846244131455399
- Micro F1 score on EVALution: 0.6625135427952329
- Micro F1 score on K&H+N: 0.9580580093204424
- Micro F1 score on ROOT09: 0.8884362268881228
- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce/raw/main/relation_mapping.json)):
- Accuracy on Relation Mapping: 0.8198809523809524
### Usage
This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip
```shell
pip install relbert
```
and activate the model as below:
```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce")
vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, )
```
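As a quick illustration of how these pair embeddings can serve the analogy task reported above, one might rank candidate pairs by cosine similarity to a query pair. A minimal sketch (the cosine helper is illustrative, not part of the relbert API):
```python
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce")

def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidate pairs against the query pair (Tokyo, Japan)
query = model.get_embedding(['Tokyo', 'Japan'])
candidates = [['Paris', 'France'], ['Paris', 'Italy']]
scores = [cosine(query, model.get_embedding(pair)) for pair in candidates]
print(candidates[int(np.argmax(scores))])  # the pair whose relation is most similar
```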
### Training hyperparameters
The following hyperparameters were used during training:
- model: roberta-large
- max_length: 64
- mode: average_no_mask
- data: relbert/conceptnet_high_confidence
- template_mode: manual
- template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask>
- loss_function: nce_logout
- temperature_nce_constant: 0.05
- temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'}
- epoch: 86
- batch: 128
- lr: 5e-06
- lr_decay: False
- lr_warmup: 1
- weight_decay: 0
- random_seed: 0
- exclude_relation: None
- n_sample: 640
- gradient_accumulation: 8
The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-conceptnet-average-no-mask-prompt-b-nce/raw/main/trainer_config.json).
### Reference
If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).
```
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
author = "Ushio, Asahi and
Schockaert, Steven and
Camacho-Collados, Jose",
booktitle = "EMNLP 2021",
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
}
```
|
kevembuvak/speecht5_finetuned_kerem-tr | kevembuvak | "2025-03-09T15:17:23Z" | 15 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | "2025-03-01T16:02:15Z" | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_kerem-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_kerem-tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4918
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 800
- mixed_precision_training: Native AMP
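For reference, a hedged sketch of how the settings above map onto `transformers`' `Seq2SeqTrainingArguments` (an approximation, not the exact script used to train this model):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_kerem-tr",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=800,
    fp16=True,  # "Native AMP" mixed precision
)
```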
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7141 | 0.2567 | 100 | 0.6743 |
| 0.6256 | 0.5133 | 200 | 0.5838 |
| 0.5663 | 0.7700 | 300 | 0.5397 |
| 0.558 | 1.0282 | 400 | 0.5211 |
| 0.5262 | 1.2849 | 500 | 0.5128 |
| 0.5253 | 1.5415 | 600 | 0.5030 |
| 0.5133 | 1.7982 | 700 | 0.4985 |
| 0.5165 | 2.0565 | 800 | 0.4918 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
AnAmbitiousMonk/ppo-LunarLander-v2 | AnAmbitiousMonk | "2024-09-14T05:46:26Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-08T15:20:38Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.35 +/- 21.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check this repo's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Filename is assumed; adjust to the actual checkpoint file in this repo
checkpoint = load_from_hub(repo_id="AnAmbitiousMonk/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
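To sanity-check the reported mean reward, the loaded policy can be evaluated as below (a sketch; assumes `gymnasium` with the Box2D extra installed and the `model` loaded above):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy
from stable_baselines3.common.monitor import Monitor

env = Monitor(gym.make("LunarLander-v2"))  # Monitor records per-episode rewards
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```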
|
Abzu/mpt-7b-q8 | Abzu | "2023-07-06T15:26:24Z" | 145 | 1 | transformers | [
"transformers",
"safetensors",
"mpt",
"text-generation",
"Composer",
"MosaicML",
"llm-foundry",
"StreamingDatasets",
"custom_code",
"dataset:mc4",
"dataset:c4",
"dataset:togethercomputer/RedPajama-Data-1T",
"dataset:bigcode/the-stack",
"dataset:allenai/s2orc",
"arxiv:2108.12409",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2010.04245",
"arxiv:1909.08053",
"arxiv:2302.06675",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | text-generation | "2023-07-06T15:20:49Z" | ---
license: apache-2.0
tags:
- Composer
- MosaicML
- llm-foundry
- StreamingDatasets
datasets:
- mc4
- c4
- togethercomputer/RedPajama-Data-1T
- bigcode/the-stack
- allenai/s2orc
inference: false
---
# MPT-7B
MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code.
This model was trained by [MosaicML](https://www.mosaicml.com).
MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.
These architectural changes include performance-optimized layer implementations and the elimination of context length limits by replacing
positional embeddings with Attention with Linear Biases ([ALiBi](https://arxiv.org/abs/2108.12409)).
Thanks to these modifications, MPT models can be trained with high throughput efficiency and stable convergence.
MPT models can also be served efficiently with both standard HuggingFace pipelines and NVIDIA's [FasterTransformer](https://github.com/NVIDIA/FasterTransformer).
This model uses the MosaicML LLM codebase, which can be found in the [llm-foundry repository](https://github.com/mosaicml/llm-foundry). It was trained by MosaicML’s NLP team on the [MosaicML platform](https://www.mosaicml.com/training) for LLM pretraining, finetuning, and inference.
### How is this model different?
MPT-7B is
* **Licensed for the possibility of commercial use** (unlike [LLaMA](https://arxiv.org/abs/2302.13971)).
* **Trained on a large amount of data** (1T tokens like [LLaMA](https://arxiv.org/abs/2302.13971) vs. 300B for [Pythia](https://github.com/EleutherAI/pythia), 300B for [OpenLLaMA](https://github.com/openlm-research/open_llama), and 800B for [StableLM](https://github.com/Stability-AI/StableLM)).
* **Prepared to handle extremely long inputs** thanks to [ALiBi](https://arxiv.org/abs/2108.12409) (we finetuned [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter) on up to 65k inputs and can handle up to 84k vs. 2k-4k for other open source models).
* **Capable of fast training and inference** (via [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf) and [FasterTransformer](https://github.com/NVIDIA/FasterTransformer))
* **Equipped with highly efficient open-source training code** via the [llm-foundry repository](https://github.com/mosaicml/llm-foundry)
### Models finetuned off MPT-7B:
The following models are finetuned on MPT-7B:
* [MPT-7B-StoryWriter-65k+](https://huggingface.co/mosaicml/mpt-7b-storywriter): a model designed to read and write fictional stories with super long context lengths.
Built by finetuning MPT-7B with a context length of 65k tokens on a filtered fiction subset of the [books3 dataset](https://huggingface.co/datasets/the_pile_books3).
At inference time, thanks to [ALiBi](https://arxiv.org/abs/2108.12409), MPT-7B-StoryWriter-65k+ can extrapolate even beyond 65k tokens.
We demonstrate generations as long as 80k tokens on a single A100-80GB GPU in our [blogpost](https://www.mosaicml.com/blog/mpt-7b).
* License: Apache 2.0
* [MPT-7B-Instruct](https://huggingface.co/mosaicml/mpt-7b-instruct): a model for short-form instruction following.
Built by finetuning MPT-7B on a [dataset](https://huggingface.co/datasets/mosaicml/dolly_hhrlhf) we also release, derived from the [Databricks Dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k) and the [Anthropic Helpful and Harmless (HH-RLHF)](https://huggingface.co/datasets/Anthropic/hh-rlhf) datasets.
* License: _CC-By-SA-3.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-instruct)
* [MPT-7B-Chat](https://huggingface.co/mosaicml/mpt-7b-chat): a chatbot-like model for dialogue generation.
Built by finetuning MPT-7B on the [ShareGPT-Vicuna](https://huggingface.co/datasets/jeffwan/sharegpt_vicuna), [HC3](https://huggingface.co/datasets/Hello-SimpleAI/HC3),
[Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), [HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), and [Evol-Instruct](https://huggingface.co/datasets/victor123/evol_instruct_70k) datasets.
* License: _CC-By-NC-SA-4.0_
* [Demo on Hugging Face Spaces](https://huggingface.co/spaces/mosaicml/mpt-7b-chat)
## Model Date
May 5, 2023
## Model License
Apache-2.0
## Documentation
* [Blog post: Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs](https://www.mosaicml.com/blog/mpt-7b)
* [Codebase (mosaicml/llm-foundry repo)](https://github.com/mosaicml/llm-foundry/)
* Questions: Feel free to contact us via the [MosaicML Community Slack](https://mosaicml.me/slack)!
## How to Use
This model is best used with the MosaicML [llm-foundry repository](https://github.com/mosaicml/llm-foundry) for training and finetuning.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained(
'mosaicml/mpt-7b',
trust_remote_code=True
)
```
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom `MPT` model architecture that is not yet part of the Hugging Face `transformers` package.
`MPT` includes options for many training efficiency features such as [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf), [ALiBi](https://arxiv.org/abs/2108.12409), [QK LayerNorm](https://arxiv.org/abs/2010.04245), and more.
To use the optimized [triton implementation](https://github.com/openai/triton) of FlashAttention, you can load the model on GPU (`cuda:0`) with `attn_impl='triton'` and with `bfloat16` precision:
```python
import torch
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.attn_config['attn_impl'] = 'triton'
config.init_device = 'cuda:0' # For fast initialization directly on GPU!
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
torch_dtype=torch.bfloat16, # Load model weights in bfloat16
trust_remote_code=True
)
```
Although the model was trained with a sequence length of 2048, ALiBi enables users to increase the maximum sequence length during finetuning and/or inference. For example:
```python
import transformers
name = 'mosaicml/mpt-7b'
config = transformers.AutoConfig.from_pretrained(name, trust_remote_code=True)
config.max_seq_len = 4096 # (input + output) tokens can now be up to 4096
model = transformers.AutoModelForCausalLM.from_pretrained(
name,
config=config,
trust_remote_code=True
)
```
This model was trained with the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
```
The model can then be used, for example, within a text-generation pipeline.
Note: when running Torch modules in lower precision, it is best practice to use the [torch.autocast context manager](https://pytorch.org/docs/stable/amp.html).
```python
import torch
from transformers import pipeline
pipe = pipeline('text-generation', model=model, tokenizer=tokenizer, device='cuda:0')
with torch.autocast('cuda', dtype=torch.bfloat16):
print(
pipe('Here is a recipe for vegan banana bread:\n',
max_new_tokens=100,
do_sample=True,
use_cache=True))
```
## Model Description
The architecture is a modification of a standard decoder-only transformer.
The model has been modified from a standard transformer in the following ways:
* It uses [FlashAttention](https://arxiv.org/pdf/2205.14135.pdf)
* It uses [ALiBi (Attention with Linear Biases)](https://arxiv.org/abs/2108.12409) and does not use positional embeddings
* It does not use biases
| Hyperparameter | Value |
|----------------|-------|
| n_parameters | 6.7B |
| n_layers | 32 |
| n_heads | 32 |
| d_model | 4096 |
| vocab size | 50432 |
| sequence length | 2048 |
## Training Data
### Streaming Datasets
Data was formatted using the MosaicML [StreamingDataset](https://github.com/mosaicml/streaming) library to host our data in object storage and efficiently stream it to our compute cluster during training.
StreamingDataset obviates the need to download the whole dataset before starting training, and allows instant resumption of training from any point in the dataset.
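As an illustration of this pattern, a minimal sketch of streaming a dataset from object storage (the bucket path is hypothetical; the API follows the `mosaicml-streaming` package):
```python
from torch.utils.data import DataLoader
from streaming import StreamingDataset  # pip install mosaicml-streaming

# Shards are fetched on demand from object storage into a local cache,
# so training can start before the full dataset has been downloaded
dataset = StreamingDataset(
    remote='s3://my-bucket/mpt-pretraining-data',  # hypothetical path
    local='/tmp/streaming-cache',
    shuffle=True,
)
loader = DataLoader(dataset, batch_size=8)
```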
### Data Mix
The model was trained for 1T tokens (with batch size 1760 and sequence length 2048). It was trained on the following data mix:
| Data Source | Number of Tokens in Source | Proportion | Effective Number of Tokens | Epochs |
|-------------|----------------------------|------------|----------------------------|--------|
| mC4 3.1.0 - English | 417.99 B | 0.33 | 330 B | 0.14 |
| C4 - English - SemDedup 80% | 100.42 B | 0.299 | 299 B | 2.98 |
| RedPajama - CommonCrawl | 878.45 B | 0.1 | 100 B | 0.11 |
| The Stack - Selected Languages | 463.78 B | 0.1 | 100 B | 0.22 |
| RedPajama - Wikipedia - En | 4.87 B | 0.04 | 40 B | 8.21 |
| The Stack - Markdown | 107.07 B | 0.035 | 35 B | 0.33 |
| S2ORC | 48.85 B | 0.033 | 33 B | 0.68 |
| RedPajama - Books | 26.02 B | 0.03 | 30B | 1.15 |
| RedPajama - arXiv | 28.10 B | 0.019 | 19 B | 0.68 |
| RedPajama - StackExchange | 20.54 B | 0.014 | 14 B |0.68 |
Samples for each batch were selected from one of the datasets with the probability specified above.
The examples were shuffled within each dataset, and each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
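A toy sketch of this probability-weighted source selection (illustrative only; truncated to three sources from the table above):
```python
import random

# Mixing proportions taken from the data-mix table (truncated)
mix = {'mC4 3.1.0 - English': 0.33, 'C4 - English - SemDedup 80%': 0.299, 'RedPajama - CommonCrawl': 0.10}
# random.choices normalizes the weights, so they need not sum to 1
batch_sources = random.choices(list(mix), weights=list(mix.values()), k=8)
print(batch_sources)
```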
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer. This BPE tokenizer has a number of desirable characteristics,
most of which are relevant for tokenizing code:
(1) It was trained on a diverse mix of data that includes code (The Pile)
(2) It applies consistent space delimitation, unlike the GPT2 tokenizer which tokenizes inconsistently depending on the presence of prefix spaces
(3) It contains tokens for repeated space characters, which allows superior compression of text with large amounts of repeated space characters.
The model vocabulary size of 50432 was set to be a multiple of 128 (as in [MEGATRON-LM](https://arxiv.org/abs/1909.08053)), which increased model flop utilization (MFU) by up to four percentage points.
### Training Configuration
This model was trained on 440 A100-40GBs for about 9.5 days using the [MosaicML Platform](https://www.mosaicml.com/platform).
The model was trained with sharded data parallelism using [FSDP](https://pytorch.org/docs/stable/fsdp.html) and used the [LION](https://arxiv.org/abs/2302.06675) optimizer.
## Limitations and Biases
_The following language is modified from [EleutherAI's GPT-NeoX-20B](https://huggingface.co/EleutherAI/gpt-neox-20b)_
MPT-7B (Base) is **not** intended for deployment without finetuning.
It should not be used for human-facing interactions without further guardrails and user consent.
MPT-7B can produce factually incorrect output, and should not be relied on to produce factually accurate information.
MPT-7B was trained on various public datasets.
While great efforts have been taken to clean the pretraining data, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
## MosaicML Platform
If you're interested in [training](https://www.mosaicml.com/training) and [deploying](https://www.mosaicml.com/inference) your own MPT or LLMs on the MosaicML Platform, [sign up here](https://forms.mosaicml.com/demo?utm_source=huggingface&utm_medium=referral&utm_campaign=mpt-7b).
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Citation
Please cite this model using the following format:
```
@online{MosaicML2023Introducing,
author = {MosaicML NLP Team},
title = {Introducing MPT-7B: A New Standard for Open-Source,
Commercially Usable LLMs},
year = {2023},
url = {www.mosaicml.com/blog/mpt-7b},
note = {Accessed: 2023-05-05},
urldate = {2023-05-05}
}
```
|
tensorblock/Mistral-RAG-GGUF | tensorblock | "2024-11-16T01:08:20Z" | 22 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"it",
"dataset:DeepMount00/gquad_it",
"base_model:DeepMount00/Mistral-RAG",
"base_model:quantized:DeepMount00/Mistral-RAG",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-11-11T13:46:43Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- DeepMount00/gquad_it
language:
- it
tags:
- TensorBlock
- GGUF
base_model: DeepMount00/Mistral-RAG
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## DeepMount00/Mistral-RAG - GGUF
This repo contains GGUF format model files for [DeepMount00/Mistral-RAG](https://huggingface.co/DeepMount00/Mistral-RAG).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Mistral-RAG-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [Mistral-RAG-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [Mistral-RAG-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [Mistral-RAG-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [Mistral-RAG-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Mistral-RAG-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [Mistral-RAG-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [Mistral-RAG-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Mistral-RAG-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [Mistral-RAG-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [Mistral-RAG-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [Mistral-RAG-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-RAG-GGUF/blob/main/Mistral-RAG-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub client
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory
```shell
huggingface-cli download tensorblock/Mistral-RAG-GGUF --include "Mistral-RAG-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files with a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Mistral-RAG-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
mradermacher/yaoi-v1-instruct-GGUF | mradermacher | "2024-08-30T21:41:09Z" | 18 | 1 | transformers | [
"transformers",
"gguf",
"code",
"yaoi",
"en",
"base_model:Ichate/yaoi-v1-instruct",
"base_model:quantized:Ichate/yaoi-v1-instruct",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-08-30T12:31:07Z" | ---
base_model: Ichate/yaoi-v1-instruct
language:
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- code
- yaoi
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Ichate/yaoi-v1-instruct
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/yaoi-v1-instruct-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
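As a concrete starting point, a minimal sketch of loading one of the quant files below with the llama-cpp-python bindings (one GGUF-capable runtime among several; the filename and context size are illustrative):
```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(model_path="yaoi-v1-instruct.Q4_K_M.gguf", n_ctx=2048)
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```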
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q2_K.gguf) | Q2_K | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_XS.gguf) | IQ3_XS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_S.gguf) | IQ3_S | 3.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_S.gguf) | Q3_K_S | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ3_M.gguf) | IQ3_M | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_M.gguf) | Q3_K_M | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q3_K_L.gguf) | Q3_K_L | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.IQ4_XS.gguf) | IQ4_XS | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q4_K_M.gguf) | Q4_K_M | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q5_K_M.gguf) | Q5_K_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q6_K.gguf) | Q6_K | 5.6 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/yaoi-v1-instruct-GGUF/resolve/main/yaoi-v1-instruct.Q8_0.gguf) | Q8_0 | 7.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jaimin/parrot_adequacy_model | jaimin | "2022-11-25T04:13:48Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-11-22T05:56:21Z" | ---
license: apache-2.0
---
# Parrot

THIS IS AN ANCILLARY MODEL FOR PARROT PARAPHRASER

## 1. What is Parrot?
Parrot is a paraphrase-based utterance augmentation framework purpose-built to accelerate training NLU models. A paraphrase framework is more than just a paraphrasing model. |
Kuldipsaro/nainaface-lora | Kuldipsaro | "2025-04-13T07:32:47Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2025-04-13T07:32:45Z" | ---
license: creativeml-openrail-m
---
|
haobozhang/dolly-adv-1.0-epoch2 | haobozhang | "2024-07-25T21:50:39Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-25T21:45:51Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Kyle1668/boss-toxicity-12000-bert-base-uncased | Kyle1668 | "2023-11-07T00:52:55Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-11-07T00:08:55Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: boss-toxicity-12000-bert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# boss-toxicity-12000-bert-base-uncased
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- F1: 0.6566
- Acc: 0.8020
- Loss: 1.1220
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | F1 | Acc | Validation Loss |
|:-------------:|:-----:|:----:|:------:|:------:|:---------------:|
| 0.5963 | 1.0 | 750 | 0.7051 | 0.8547 | 0.3709 |
| 0.2873 | 2.0 | 1500 | 0.7428 | 0.8860 | 0.2969 |
| 0.206 | 3.0 | 2250 | 0.7121 | 0.8594 | 0.4031 |
| 0.1319 | 4.0 | 3000 | 0.7368 | 0.8832 | 0.4616 |
| 0.0751 | 5.0 | 3750 | 0.6566 | 0.8020 | 1.1220 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
clinical-assistance/whisper_medium_clinical_assistance_10k | clinical-assistance | "2024-04-29T06:13:26Z" | 15 | 1 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"dataset:Mezosky/es_clinical_assistance_10k",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-04-29T02:50:50Z" | ---
language:
- es
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
datasets:
- Mezosky/es_clinical_assistance_10k
metrics:
- wer
model-index:
- name: Whisper Chilean Spanish Medium
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Mezosky/es_clinical_assistance_10k
type: Mezosky/es_clinical_assistance_10k
metrics:
- name: Wer
type: wer
value: 7.774513918030494
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Chilean Spanish Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Mezosky/es_clinical_assistance_10k dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1058
- Wer: 7.7745
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6275 | 0.17 | 100 | 0.5455 | 13.3333 |
| 0.185 | 0.34 | 200 | 0.1782 | 10.7316 |
| 0.1523 | 0.51 | 300 | 0.1539 | 10.9106 |
| 0.1373 | 0.69 | 400 | 0.1399 | 10.1329 |
| 0.1538 | 0.86 | 500 | 0.1322 | 17.5493 |
| 0.1007 | 1.03 | 600 | 0.1238 | 8.4963 |
| 0.0782 | 1.2 | 700 | 0.1187 | 8.4599 |
| 0.0722 | 1.37 | 800 | 0.1128 | 7.8137 |
| 0.0715 | 1.54 | 900 | 0.1081 | 7.6934 |
| 0.0927 | 1.72 | 1000 | 0.1058 | 7.7745 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
UKP-SQuARE/roberta-base-pf-race-onnx | UKP-SQuARE | "2023-01-03T21:42:59Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"onnx",
"roberta",
"adapterhub:rc/race",
"en",
"dataset:race",
"arxiv:2104.08247",
"region:us"
] | null | "2023-01-03T21:39:49Z" | ---
inference: false
tags:
- onnx
- adapterhub:rc/race
- roberta
- adapter-transformers
datasets:
- race
language:
- en
---
# ONNX export of Adapter `AdapterHub/roberta-base-pf-race` for roberta-base
## Conversion of [AdapterHub/roberta-base-pf-race](https://huggingface.co/AdapterHub/roberta-base-pf-race) for UKP SQuARE
## Usage
```python
import numpy as np
from huggingface_hub import hf_hub_download
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

onnx_path = hf_hub_download(repo_id='UKP-SQuARE/roberta-base-pf-race-onnx', filename='model.onnx')  # or model_quant.onnx for the quantized variant
onnx_model = InferenceSession(onnx_path, providers=['CPUExecutionProvider'])

context = 'ONNX is an open format to represent models. The benefits of using ONNX include interoperability of frameworks and hardware optimization.'
question = 'What are advantages of ONNX?'
choices = ["Cat", "Horse", "Tiger", "Fish"]

tokenizer = AutoTokenizer.from_pretrained('UKP-SQuARE/roberta-base-pf-race-onnx')
raw_input = [[context, question + " " + choice] for choice in choices]
inputs = tokenizer(raw_input, padding=True, truncation=True, return_tensors="np")
# Add a leading batch dimension over the choice axis, as the multiple-choice head expects
inputs['token_type_ids'] = np.expand_dims(inputs['token_type_ids'], axis=0)
inputs['input_ids'] = np.expand_dims(inputs['input_ids'], axis=0)
inputs['attention_mask'] = np.expand_dims(inputs['attention_mask'], axis=0)
outputs = onnx_model.run(input_feed=dict(inputs), output_names=None)
```
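The first output holds the choice logits; the predicted answer can then be read off as below (a short sketch under the assumption that the logits have shape `(1, num_choices)`):
```python
logits = outputs[0]
predicted = choices[int(np.argmax(logits, axis=-1)[0])]
print(predicted)
```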
## Architecture & Training
The training code for this adapter is available at https://github.com/adapter-hub/efficient-task-transfer.
In particular, training configurations for all tasks can be found [here](https://github.com/adapter-hub/efficient-task-transfer/tree/master/run_configs).
## Evaluation results
Refer to [the paper](https://arxiv.org/pdf/2104.08247) for more information on results.
## Citation
If you use this adapter, please cite our paper ["What to Pre-Train on? Efficient Intermediate Task Selection"](https://arxiv.org/pdf/2104.08247):
```bibtex
@inproceedings{poth-etal-2021-what-to-pre-train-on,
title={What to Pre-Train on? Efficient Intermediate Task Selection},
author={Clifton Poth and Jonas Pfeiffer and Andreas Rücklé and Iryna Gurevych},
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/2104.08247",
pages = "to appear",
}
``` |
nhoxinh/471184e8-7acc-494a-9047-81e85decaab4 | nhoxinh | "2025-01-29T17:55:13Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-29T17:35:17Z" | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 471184e8-7acc-494a-9047-81e85decaab4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 7b9feae43db39685_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/7b9feae43db39685_train_data.json
type:
field_instruction: choices
field_output: question
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhoxinh/471184e8-7acc-494a-9047-81e85decaab4
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/7b9feae43db39685_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 63d22c32-c325-4eb3-9ac7-a5a784331233
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 63d22c32-c325-4eb3-9ac7-a5a784331233
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 471184e8-7acc-494a-9047-81e85decaab4
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0130
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.8335 | 0.1154 | 200 | 2.0130 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SergioMer/distilbert-base-uncased-finetuned-emotion | SergioMer | "2024-05-31T13:43:25Z" | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-27T16:22:14Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9309673999787987
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2067
- Accuracy: 0.931
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7869 | 1.0 | 250 | 0.2859 | 0.917 | 0.9162 |
| 0.2353 | 2.0 | 500 | 0.2067 | 0.931 | 0.9310 |
### Framework versions
- Transformers 4.41.1
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
marianodo/MegaBatchMarginLoss-10 | marianodo | "2023-05-05T14:28:48Z" | 9 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-05-05T14:27:55Z" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# marianodo/MegaBatchMarginLoss-10
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('marianodo/MegaBatchMarginLoss-10')
embeddings = model.encode(sentences)
print(embeddings)
```
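Building on the semantic-search use mentioned above, a short sketch with the library's built-in utility (the query and corpus strings are illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('marianodo/MegaBatchMarginLoss-10')
query_emb = model.encode("How do I bake bread?", convert_to_tensor=True)
corpus_emb = model.encode(["Bread baking instructions", "Car repair manual"], convert_to_tensor=True)
hits = util.semantic_search(query_emb, corpus_emb, top_k=1)
print(hits)  # e.g. [[{'corpus_id': 0, 'score': ...}]]
```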
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=marianodo/MegaBatchMarginLoss-10)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 16 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 16,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |