pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
null | mlx |
# mlx-community/Hermes-2-Pro-Llama-3-8B-4bit
This model was converted to MLX format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Hermes-2-Pro-Llama-3-8B-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "mlx"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | mlx-community/Hermes-2-Pro-Llama-3-8B-4bit | null | [
"mlx",
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T00:45:44+00:00 |
null | transformers | {} | magnifi/llama-cls-ner-mt-chat-v21-6_epoch_24-ct2 | null | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T00:46:41+00:00 |
|
text-to-image | diffusers | ### love666 Dreambooth model trained by kkk44 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
| {"license": "creativeml-openrail-m", "tags": ["text-to-image", "stable-diffusion"]} | kkk44/love666 | null | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | 2024-05-02T00:47:02+00:00 |
null | null | {} | eminAydin/gpt2_pretrain_fromScratch_v2 | null | [
"region:us"
] | null | 2024-05-02T00:47:54+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/abhinand/Llama-3-OpenBioMed-8B-dare-ties-v1.0
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available from me at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF/resolve/main/Llama-3-OpenBioMed-8B-dare-ties-v1.0.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
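As a rough illustration (our own arithmetic, not a figure from this card), the Size/GB column can be converted into an approximate bits-per-weight number. The ~8.03B parameter count assumed below is typical for Llama-3-8B models, and GGUF files carry some metadata overhead, so treat the results as estimates:

```python
# Rough bits-per-weight estimate from a GGUF file size (illustrative only).
# Assumes ~8.03e9 parameters; actual counts and file overhead vary slightly.

def bits_per_weight(size_gb: float, n_params: float = 8.03e9) -> float:
    """Approximate bits per weight implied by a quant file size in GB."""
    return size_gb * 1e9 * 8 / n_params

# Q4_K_M is listed at 5.0 GB -> roughly 5 bits per weight.
print(round(bits_per_weight(5.0), 1))  # ~5.0
# Q8_0 at 8.6 GB -> roughly 8.6 bits per weight.
print(round(bits_per_weight(8.6), 1))  # ~8.6
```

This is why Q4_K_M is the usual speed/quality sweet spot: it halves the footprint of Q8_0 while staying near 5 bits per weight.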
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit", "aaditya/Llama3-OpenBioLLM-8B", "johnsnowlabs/JSL-MedLlama-3-8B-v1.0", "winninghealth/WiNGPT2-Llama-3-8B-Base"], "base_model": "abhinand/Llama-3-OpenBioMed-8B-dare-ties-v1.0", "license_name": "llama3", "quantized_by": "mradermacher"} | mradermacher/Llama-3-OpenBioMed-8B-dare-ties-v1.0-GGUF | null | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"aaditya/Llama3-OpenBioLLM-8B",
"johnsnowlabs/JSL-MedLlama-3-8B-v1.0",
"winninghealth/WiNGPT2-Llama-3-8B-Base",
"en",
"base_model:abhinand/Llama-3-OpenBioMed-8B-dare-ties-v1.0",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T00:51:01+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | kyounghyun/llama-2-ko-7b-20240502 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T00:52:00+00:00 |
null | mlx |
# mlx-community/Hermes-2-Pro-Llama-3-8B-8bit
This model was converted to MLX format from [`NousResearch/Hermes-2-Pro-Llama-3-8B`]() using mlx-lm version **0.12.0**.
Refer to the [original model card](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B) for more details on the model.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mlx-community/Hermes-2-Pro-Llama-3-8B-8bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "mlx"], "datasets": ["teknium/OpenHermes-2.5"], "base_model": "NousResearch/Meta-Llama-3-8B", "widget": [{"example_title": "Hermes 2 Pro", "messages": [{"role": "system", "content": "You are a sentient, superintelligent artificial general intelligence, here to teach and assist me."}, {"role": "user", "content": "Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world."}]}], "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | mlx-community/Hermes-2-Pro-Llama-3-8B-8bit | null | [
"mlx",
"safetensors",
"llama",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"axolotl",
"en",
"dataset:teknium/OpenHermes-2.5",
"base_model:NousResearch/Meta-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T00:52:47+00:00 |
text-generation | transformers | This model is based on Meta-Llama-3-8B-Instruct and is governed by the Meta Llama 3 License agreement:
https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
ORPO fine-tuning was performed using the following datasets:
- https://huggingface.co/datasets/Intel/orca_dpo_pairs
- https://huggingface.co/datasets/argilla/distilabel-math-preference-dpo
- https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2
- https://huggingface.co/datasets/M4-ai/prm_dpo_pairs_cleaned
- https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
Despite including the toxic datasets to reduce refusals, this model is still relatively safe, though it refuses less than the original Meta model.
As of now, ORPO fine-tuning seems to improve some metrics while significantly reducing others:

We are happy for anyone to try it out and give feedback, and we will host the model on https://awanllm.com if it proves popular.
Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>
{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
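The template above can be assembled programmatically. The sketch below is our own illustration of the layout shown in the card (in practice, `tokenizer.apply_chat_template` applies the model's bundled template and should be preferred):

```python
# Minimal, illustrative builder for the Llama-3 instruct format shown above.
# The blank line after each header follows the official Llama-3 template;
# prefer tokenizer.apply_chat_template in real code.

def build_llama3_prompt(messages: list[dict]) -> str:
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
                     f"{m['content']}<|eot_id|>")
    # Leave the prompt open for the assistant's reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```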
Quants: | {"license": "llama3"} | AwanLLM/Awanllm-Llama-3-8B-Instruct-ORPO-v0.1 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"license:llama3",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T00:54:18+00:00 |
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-ft__roberta-base-bne__70k-ultrasounds
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6164
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 259 | 1.1165 |
| No log | 2.0 | 518 | 0.9244 |
| No log | 3.0 | 777 | 0.8153 |
| 1.0516 | 4.0 | 1036 | 0.7842 |
| 1.0516 | 5.0 | 1295 | 0.7262 |
| 1.0516 | 6.0 | 1554 | 0.7235 |
| 1.0516 | 7.0 | 1813 | 0.6960 |
| 0.7109 | 8.0 | 2072 | 0.6787 |
| 0.7109 | 9.0 | 2331 | 0.6799 |
| 0.7109 | 10.0 | 2590 | 0.6718 |
| 0.7109 | 11.0 | 2849 | 0.6488 |
| 0.6385 | 12.0 | 3108 | 0.6426 |
| 0.6385 | 13.0 | 3367 | 0.6415 |
| 0.6385 | 14.0 | 3626 | 0.6232 |
| 0.6385 | 15.0 | 3885 | 0.6329 |
| 0.6042 | 16.0 | 4144 | 0.6036 |
| 0.6042 | 17.0 | 4403 | 0.6161 |
| 0.6042 | 18.0 | 4662 | 0.6193 |
| 0.6042 | 19.0 | 4921 | 0.6183 |
| 0.587 | 20.0 | 5180 | 0.6164 |
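As a quick sanity check (our own arithmetic, not stated in the card): the total train batch size follows from the per-device batch size times gradient accumulation, and the 259 optimizer steps per epoch in the table above imply the approximate training-set size:

```python
# Illustrative arithmetic on the hyperparameters above; the ~66k sample
# figure is inferred from steps-per-epoch, not stated in the card.

train_batch_size = 64
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 256, matching the card

steps_per_epoch = 259  # from the training-results table
approx_train_samples = steps_per_epoch * total_train_batch_size
print(approx_train_samples)  # 66304 (an upper bound; the last step may be partial)
```

This is consistent with the "70k ultrasounds" in the model name once a validation split is held out.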
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "PlanTL-GOB-ES/roberta-base-bne", "model-index": [{"name": "final-ft__roberta-base-bne__70k-ultrasounds", "results": []}]} | manucos/final-ft__roberta-base-bne__70k-ultrasounds | null | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:PlanTL-GOB-ES/roberta-base-bne",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T00:55:28+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-Chat-v1.0 - bnb 4bits
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
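As a back-of-envelope check (our own estimate, not a figure published by the project), the stated budget of 3 trillion tokens in 90 days on 16 GPUs implies the following sustained throughput; real runs also lose time to restarts, evaluation, and checkpointing:

```python
# Rough throughput implied by the stated pretraining budget (illustrative).

tokens = 3e12   # 3 trillion tokens
days = 90
gpus = 16

seconds = days * 86400
total_tok_per_s = tokens / seconds          # cluster-wide
per_gpu_tok_per_s = total_tok_per_s / gpus  # per A100-40G

print(f"{total_tok_per_s:,.0f} tokens/s across the cluster")  # ~385,802
print(f"{per_gpu_tok_per_s:,.0f} tokens/s per GPU")           # ~24,113
```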
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
#### How to use
You will need `transformers>=4.34`.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {
        "role": "system",
        "content": "You are a friendly chatbot who always responds in the style of a pirate",
    },
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
| {} | RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-4bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | null | 2024-05-02T00:58:20+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 Transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | mrguz170/Aura | null | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T00:58:28+00:00 |
text-generation | transformers | # timpal0l/BeagleCatMunin2 AWQ
- Model creator: [timpal0l](https://huggingface.co/timpal0l)
- Original model: [BeagleCatMunin2](https://huggingface.co/timpal0l/BeagleCatMunin2)
## Model Summary
BeagleCatMunin2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [bineric/NorskGPT-Mistral-7b](https://huggingface.co/bineric/NorskGPT-Mistral-7b)
* [timpal0l/BeagleCatMunin](https://huggingface.co/timpal0l/BeagleCatMunin)
* [RJuro/munin-neuralbeagle-7b](https://huggingface.co/RJuro/munin-neuralbeagle-7b)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/BeagleCatMunin2-AWQ"
system_message = "You are BeagleCatMunin2, incarnated as a powerful AI. You were created by timpal0l."
# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = ("You're standing on the surface of the Earth. "
          "You walk one mile south, one mile west and one mile north. "
          "You end up exactly where you started. Where are you?")

# Convert the formatted prompt to tokens
tokens = tokenizer(prompt_template.format(system_message=system_message, prompt=prompt),
                   return_tensors="pt").input_ids.cuda()

# Generate output, streaming tokens as they are produced
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
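As a rough illustration of why 4-bit weight quantization matters (the numbers below are our own estimates, not from this card): weight memory scales linearly with bits per parameter, so dropping from fp16 to 4-bit cuts the weight footprint by about 4x:

```python
# Approximate weight-memory footprint for a 7B-parameter model at different
# precisions. Illustrative only: ignores activations, the KV cache, and
# quantization metadata (scales/zero-points), which add real overhead.

def weight_gb(n_params: float, bits: float) -> float:
    """Weight memory in GB for n_params parameters at the given bit width."""
    return n_params * bits / 8 / 1e9

n = 7e9  # e.g. a Mistral-7B-class model
print(round(weight_gb(n, 16), 1))  # fp16: ~14.0 GB
print(round(weight_gb(n, 4), 1))   # 4-bit AWQ: ~3.5 GB
```

That difference is what lets AWQ models fit on a single consumer GPU.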
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible", "merge", "mergekit", "lazymergekit", "bineric/NorskGPT-Mistral-7b", "timpal0l/BeagleCatMunin", "RJuro/munin-neuralbeagle-7b"], "base_model": ["bineric/NorskGPT-Mistral-7b", "timpal0l/BeagleCatMunin", "RJuro/munin-neuralbeagle-7b"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/BeagleCatMunin2-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"merge",
"mergekit",
"lazymergekit",
"bineric/NorskGPT-Mistral-7b",
"timpal0l/BeagleCatMunin",
"RJuro/munin-neuralbeagle-7b",
"base_model:bineric/NorskGPT-Mistral-7b",
"base_model:timpal0l/BeagleCatMunin",
"base_model:RJuro/munin-neuralbeagle-7b",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T00:59:16+00:00 |
text-generation | transformers | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-Chat-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/
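An 8-bit build stores roughly one byte per weight, so this 1.1B model needs about 1 GiB for weights alone. A hedged loading sketch via bitsandbytes (the dependencies and GPU requirement are assumptions; not tested against this repo):

```python
def weights_gib_8bit(n_params: float) -> float:
    """Approximate weight memory at 8 bits: one byte per parameter."""
    return n_params / 2**30

def load_8bit(repo_id: str = "RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-8bits"):
    """Deferred imports: requires transformers, accelerate and bitsandbytes,
    plus a CUDA GPU. The repo id default is taken from this card."""
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(
        repo_id,
        quantization_config=BitsAndBytesConfig(load_in_8bit=True),
        device_map="auto",
    )
    return model, tokenizer

print(round(weights_gib_8bit(1.1e9), 2))  # about 1.02 GiB
```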
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
#### How to use
You will need `transformers>=4.34`.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
| {} | RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-8bits | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"8-bit",
"region:us"
] | null | 2024-05-02T00:59:43+00:00 |
null | null | {"license": "openrail"} | victorluan/datasetcorleone | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T01:00:46+00:00 |
|
null | null | {} | isferi1996/my-en-fr-output | null | [
"region:us"
] | null | 2024-05-02T01:01:49+00:00 |
|
fill-mask | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# final-ft__bert-base-spanish-wwm-cased__70k-ultrasounds
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 276 | 0.9156 |
| No log | 2.0 | 552 | 0.7658 |
| No log | 3.0 | 828 | 0.7083 |
| 0.9708 | 4.0 | 1104 | 0.6710 |
| 0.9708 | 5.0 | 1380 | 0.6380 |
| 0.9708 | 6.0 | 1656 | 0.6119 |
| 0.9708 | 7.0 | 1932 | 0.6025 |
| 0.6612 | 8.0 | 2208 | 0.5752 |
| 0.6612 | 9.0 | 2484 | 0.5708 |
| 0.6612 | 10.0 | 2760 | 0.5680 |
| 0.6612 | 11.0 | 3036 | 0.5425 |
| 0.6013 | 12.0 | 3312 | 0.5635 |
| 0.6013 | 13.0 | 3588 | 0.5386 |
| 0.6013 | 14.0 | 3864 | 0.5476 |
| 0.6013 | 15.0 | 4140 | 0.5458 |
| 0.5675 | 16.0 | 4416 | 0.5345 |
| 0.5675 | 17.0 | 4692 | 0.5350 |
| 0.5675 | 18.0 | 4968 | 0.5280 |
| 0.5675 | 19.0 | 5244 | 0.5260 |
| 0.5549 | 20.0 | 5520 | 0.5227 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"tags": ["generated_from_trainer"], "base_model": "dccuchile/bert-base-spanish-wwm-cased", "model-index": [{"name": "final-ft__bert-base-spanish-wwm-cased__70k-ultrasounds", "results": []}]} | manucos/final-ft__bert-base-spanish-wwm-cased__70k-ultrasounds | null | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"base_model:dccuchile/bert-base-spanish-wwm-cased",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:02:00+00:00 |
null | null | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
TinyLlama-1.1B-Chat-v1.0 - GGUF
- Model creator: https://huggingface.co/TinyLlama/
- Original model: https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf) | Q2_K | 0.4GB |
| [TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf) | Q3_K | 0.51GB |
| [TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf) | Q4_0 | 0.59GB |
| [TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf) | Q4_K | 0.62GB |
| [TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf) | Q4_1 | 0.65GB |
| [TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf) | Q5_0 | 0.71GB |
| [TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf) | Q5_K | 0.73GB |
| [TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf) | Q5_1 | 0.77GB |
| [TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf) | Q6_K | 0.84GB |
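The files above span roughly 0.4–0.84 GB, so a small helper can pick the largest quant that fits a memory budget; a loading sketch via llama-cpp-python follows (an assumed dependency, untested here — sizes are copied from a few rows of the table):

```python
# File sizes in GB, taken from the table above.
QUANT_SIZES_GB = {
    "Q2_K": 0.40,
    "Q3_K_M": 0.51,
    "Q4_K_M": 0.62,
    "Q5_K_M": 0.73,
    "Q6_K": 0.84,
}

def best_quant(budget_gb: float):
    """Largest quant whose file fits the budget, or None if nothing fits."""
    fitting = {name: size for name, size in QUANT_SIZES_GB.items() if size <= budget_gb}
    return max(fitting, key=fitting.get) if fitting else None

def load_gguf(path: str):
    """Deferred import: requires llama-cpp-python and a local .gguf file."""
    from llama_cpp import Llama
    return Llama(model_path=path, n_ctx=2048)

print(best_quant(0.7))  # Q4_K_M
```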
Original model description:
---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- HuggingFaceH4/ultrachat_200k
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
widget:
- example_title: Fibonacci (Python)
messages:
- role: system
content: You are a chatbot who can help code!
- role: user
content: Write me a function to calculate the first 10 digits of the fibonacci sequence in Python and print it out to the CLI.
---
<div align="center">
# TinyLlama-1.1B
</div>
https://github.com/jzhang38/TinyLlama
The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. Training started on 2023-09-01.
We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
#### This Model
This is the chat model finetuned on top of [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T). **We follow [HF's Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)'s training recipe.** The model was initially fine-tuned on a variant of the [`UltraChat`](https://huggingface.co/datasets/stingning/ultrachat) dataset, which contains a diverse range of synthetic dialogues generated by ChatGPT.
We then further aligned the model with [🤗 TRL's](https://github.com/huggingface/trl) `DPOTrainer` on the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, which contains 64k prompts and model completions ranked by GPT-4.
#### How to use
You will need `transformers>=4.34`.
Check the [TinyLlama](https://github.com/jzhang38/TinyLlama) GitHub page for more information.
```python
# Install transformers from source - only needed for versions <= v4.34
# pip install git+https://github.com/huggingface/transformers.git
# pip install accelerate
import torch
from transformers import pipeline
pipe = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", torch_dtype=torch.bfloat16, device_map="auto")
# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
{
"role": "system",
"content": "You are a friendly chatbot who always responds in the style of a pirate",
},
{"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
# <|system|>
# You are a friendly chatbot who always responds in the style of a pirate.</s>
# <|user|>
# How many helicopters can a human eat in one sitting?</s>
# <|assistant|>
# ...
```
| {} | RichardErkhov/TinyLlama_-_TinyLlama-1.1B-Chat-v1.0-gguf | null | [
"gguf",
"region:us"
] | null | 2024-05-02T01:02:33+00:00 |
text-generation | transformers | {"license": "mit"} | Minwoo-Study/AIHUB_IndonesianCorpus_Generation | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:03:57+00:00 |
|
text-generation | transformers | {} | waelChafei/llama2-summarization | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:06:01+00:00 |
|
text-to-image | diffusers | {} | Moibe/asianportrait | null | [
"diffusers",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | null | 2024-05-02T01:06:07+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:06:12+00:00 |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | khairi/esm2_t6_8M_UR50D | null | [
"transformers",
"safetensors",
"esm",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:06:14+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-1
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-1", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-1 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:06:24+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-0
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-0", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-0 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:06:30+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-70m](https://huggingface.co/EleutherAI/pythia-70m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-70m", "model-index": [{"name": "robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-70m_mz-133_EnronSpam_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-70m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:06:41+00:00 |
sentence-similarity | sentence-transformers |
# eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch3
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 4096-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch3')
model = AutoModel.from_pretrained('eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch3')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1167 with parameters:
```
{'batch_size': 1, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 351,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 4096, 'do_lower_case': False}) with Transformer model: LlamaModel
(1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | {"library_name": "sentence-transformers", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "pipeline_tag": "sentence-similarity"} | eunyounglee/EEVE-LLM2VEC-MNTP-STS-epoch1 | null | [
"sentence-transformers",
"safetensors",
"llama",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:06:41+00:00 |
text-generation | transformers | {} | awilliamson/derby-fft-of | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:08:31+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** HHHHYYYYIIII
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "llama", "trl"], "base_model": "unsloth/llama-3-8b-bnb-4bit"} | HHHHYYYYIIII/lora_llama3HYYYYYYIIII | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:08:42+00:00 |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | khairi/esm2_t33_650M_UR50D | null | [
"transformers",
"safetensors",
"esm",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:08:45+00:00 |
text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 0.0_withdpo_4iters_bs256_531lr_iter_4
This model is a fine-tuned version of [ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_3](https://huggingface.co/ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_3) on the updated and the original datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
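The total train batch size of 256 listed above is derived from the other values rather than set independently; a quick sketch of that arithmetic (per-device batch × devices × gradient-accumulation steps):

```python
# Effective batch size for multi-GPU training with gradient accumulation,
# using the hyperparameter values reported in this card.
per_device_train_batch_size = 8
num_devices = 8
gradient_accumulation_steps = 4

total_train_batch_size = (
    per_device_train_batch_size * num_devices * gradient_accumulation_steps
)
print(total_train_batch_size)  # 256, matching the reported total
```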
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo", "generated_from_trainer"], "datasets": ["updated", "original"], "base_model": "ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_3", "model-index": [{"name": "0.0_withdpo_4iters_bs256_531lr_iter_4", "results": []}]} | ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_4 | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"conversational",
"dataset:updated",
"dataset:original",
"base_model:ShenaoZ/0.0_withdpo_4iters_bs256_531lr_iter_3",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:09:15+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"library_name": "transformers", "tags": []} | 12thD/ko-Llama-3-8B-sft-v0.2 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:10:24+00:00 |
text-generation | transformers | # timpal0l/Mistral-7B-v0.1-flashback-v2 AWQ
- Model creator: [timpal0l](https://huggingface.co/timpal0l)
- Original model: [Mistral-7B-v0.1-flashback-v2](https://huggingface.co/timpal0l/Mistral-7B-v0.1-flashback-v2)
## How to use
### Install the necessary packages
```bash
pip install --upgrade autoawq autoawq-kernels
```
### Example Python code
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer
model_path = "solidrust/Mistral-7B-v0.1-flashback-v2-AWQ"
system_message = "You are Mistral-7B-v0.1-flashback-v2, incarnated as a powerful AI. You were created by timpal0l."
# Load model
model = AutoAWQForCausalLM.from_quantized(model_path,
fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path,
trust_remote_code=True)
streamer = TextStreamer(tokenizer,
skip_prompt=True,
skip_special_tokens=True)
# Convert prompt to tokens
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""
prompt = "You're standing on the surface of the Earth. "\
"You walk one mile south, one mile west and one mile north. "\
"You end up exactly where you started. Where are you?"
tokens = tokenizer(prompt_template.format(system_message=system_message,prompt=prompt),
return_tensors='pt').input_ids.cuda()
# Generate output
generation_output = model.generate(tokens,
streamer=streamer,
max_new_tokens=512)
```
### About AWQ
AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference with equivalent or better quality than the most commonly used GPTQ settings.
AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.
It is supported by:
- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support for all model types.
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
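As a rough illustration of why 4-bit quantization is attractive, weight-only memory scales with bits per parameter. The sketch below assumes ~7.24B parameters for a Mistral-7B-class model (an approximation; real checkpoints keep some tensors unquantized and add runtime overhead):

```python
def approx_weight_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory in GB (1 GB = 1e9 bytes)."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 7.24e9  # assumed parameter count for a Mistral-7B-class model
print(round(approx_weight_gb(n_params, 16), 1))  # ~14.5 GB at fp16
print(round(approx_weight_gb(n_params, 4), 1))   # ~3.6 GB at 4-bit
```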
| {"library_name": "transformers", "tags": ["4-bit", "AWQ", "text-generation", "autotrain_compatible", "endpoints_compatible"], "pipeline_tag": "text-generation", "inference": false, "quantized_by": "Suparious"} | solidrust/Mistral-7B-v0.1-flashback-v2-AWQ | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"4-bit",
"AWQ",
"autotrain_compatible",
"endpoints_compatible",
"conversational",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:11:05+00:00 |
null | null | {"license": "mit"} | paigevegna/crypticToT | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T01:12:46+00:00 |
|
null | null |
# Multiverseex26T3q-7B
Multiverseex26T3q-7B is an automated merge created by [Maxime Labonne](https://huggingface.co/mlabonne) using the following configuration.
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
- model: allknowingroger/MultiverseEx26-7B-slerp
- model: chihoonlee10/T3Q-Mistral-Orca-Math-DPO
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```
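Model stock, like the other mergekit methods, combines checkpoints in weight space. As a toy illustration only — mergekit's actual model_stock method chooses interpolation weights per layer from the geometry of the task vectors, which this sketch does not reproduce — uniform averaging of two same-shaped state dicts looks like:

```python
def average_state_dicts(dicts):
    """Uniformly average same-shaped toy 'state dicts' (name -> list of floats)."""
    return {
        key: [sum(vals) / len(dicts) for vals in zip(*(d[key] for d in dicts))]
        for key in dicts[0]
    }

a = {"w": [1.0, 2.0]}
b = {"w": [3.0, 6.0]}
print(average_state_dicts([a, b]))  # {'w': [2.0, 4.0]}
```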
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "automerger/Multiverseex26T3q-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "automerger"]} | automerger/Multiverseex26T3q-7B | null | [
"merge",
"mergekit",
"lazymergekit",
"automerger",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:12:46+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| {"language": ["ko", "en"], "license": "other", "library_name": "transformers", "tags": ["korean", "gemma"], "license_name": "meta-llama-3-community-license-agreement", "pipeline_tag": "text-generation", "base_model": "meta-llama/Meta-Llama-3-8B"} | 12thD/ko-Llama-3-8B-sft-v0.3 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"korean",
"gemma",
"conversational",
"ko",
"en",
"arxiv:1910.09700",
"base_model:meta-llama/Meta-Llama-3-8B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:14:40+00:00 |
null | null | {"license": "openrail"} | Adnan1361/ddfgth54343 | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T01:14:48+00:00 |
|
null | null | {} | Greko89/AlexisFido | null | [
"region:us"
] | null | 2024-05-02T01:15:47+00:00 |
|
null | transformers |
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | dbands/Phi-3-mini-4k-instruct-orca-math-word-problems-200k-model | null | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:15:49+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": ["unsloth"]} | dbands/Phi-3-mini-4k-instruct-orca-math-word-problems-200k-lora_model | null | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:16:09+00:00 |
text-generation | transformers |
# Uploaded model
- **Developed by:** dbands
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
| {"language": ["en"], "license": "apache-2.0", "tags": ["text-generation-inference", "transformers", "unsloth", "mistral", "trl", "sft"], "base_model": "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"} | dbands/Phi-3-mini-4k-instruct-orca-math-word-problems-200k-model-16bit | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:16:16+00:00 |
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/Alphacode-AI/Alphallama3-8B
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Alphallama3-8B-GGUF/resolve/main/Alphallama3-8B.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "base_model": "Alphacode-AI/Alphallama3-8B", "quantized_by": "mradermacher"} | mradermacher/Alphallama3-8B-GGUF | null | [
"transformers",
"gguf",
"en",
"base_model:Alphacode-AI/Alphallama3-8B",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:16:41+00:00 |
null | null | {"license": "openrail"} | GoldoBasic/tvstatic | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T01:17:59+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_InstructionN3_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
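With `lr_scheduler_type: linear` and no warmup reported, the learning rate decays linearly from 5e-05 to 0 over the run; a sketch of that schedule (the total step count below is illustrative, not taken from this card):

```python
BASE_LR = 5e-05  # learning_rate from the table above

def linear_lr(step, total_steps, base_lr=BASE_LR, warmup_steps=0):
    """Linear warmup (if any) followed by linear decay to zero."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))

# Halfway through a hypothetical 10,000-step run:
# linear_lr(5000, 10000) -> 2.5e-05
```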
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN3_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_InstructionN3_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:18:25+00:00 |
text-generation | transformers |
# phillama-prune20
phillama-prune20 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [raincandy-u/phillama-3.8b-v1](https://huggingface.co/raincandy-u/phillama-3.8b-v1)
* [raincandy-u/phillama-3.8b-v1](https://huggingface.co/raincandy-u/phillama-3.8b-v1)
## π§© Configuration
```yaml
slices:
- sources:
- model: raincandy-u/phillama-3.8b-v1
layer_range: [0, 18]
- sources:
- model: raincandy-u/phillama-3.8b-v1
layer_range: [21, 32]
merge_method: passthrough
dtype: bfloat16
```
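Assuming mergekit's usual end-exclusive `layer_range` convention, the slices above keep layers 0–17 and 21–31 of the 32-layer base, dropping three middle layers; a quick check of the arithmetic:

```python
# layer_range bounds are taken as half-open: [start, end), end exclusive
slices = [(0, 18), (21, 32)]

kept = sum(end - start for start, end in slices)
dropped = sorted(set(range(32)) - {l for s, e in slices for l in range(s, e)})

print(kept)     # 29 layers remain in the merged model
print(dropped)  # [18, 19, 20]
```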
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "aipib/phillama-prune20"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` | {"tags": ["merge", "mergekit", "lazymergekit", "raincandy-u/phillama-3.8b-v1"], "base_model": ["raincandy-u/phillama-3.8b-v1", "raincandy-u/phillama-3.8b-v1"]} | aipib/phillama-prune20 | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"raincandy-u/phillama-3.8b-v1",
"conversational",
"base_model:raincandy-u/phillama-3.8b-v1",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:18:26+00:00 |
null | null | How to use:
First, prefix your text with "!h1-" (or any other header marker, "!h1-" through "!h6-"), followed by your text.

For example:
!h2-huggingface.co
Then save it as a .txt file and open it with the notepad-h-conv module (notemsk.html) from this repo.
Now you should see some headers! | {"language": ["en"], "license": "apache-2.0"} | derckr949/notepad-h-conv | null | [
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:18:50+00:00 |
null | null |
# joeshmoethefunnyone/pythia-410m-Q4_K_M-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-410m`](https://huggingface.co/EleutherAI/pythia-410m) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-410m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo joeshmoethefunnyone/pythia-410m-Q4_K_M-GGUF --model pythia-410m.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo joeshmoethefunnyone/pythia-410m-Q4_K_M-GGUF --model pythia-410m.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-410m.Q4_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]} | joeshmoethefunnyone/pythia-410m-Q4_K_M-GGUF | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:19:35+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_InstructionN4_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN4_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_InstructionN4_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:19:40+00:00 |
null | null | {} | ktanku/Day6 | null | [
"region:us"
] | null | 2024-05-02T01:20:10+00:00 |
|
null | null |
# joeshmoethefunnyone/pythia-410m-Q3_K_M-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-410m`](https://huggingface.co/EleutherAI/pythia-410m) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-410m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo joeshmoethefunnyone/pythia-410m-Q3_K_M-GGUF --model pythia-410m.Q3_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo joeshmoethefunnyone/pythia-410m-Q3_K_M-GGUF --model pythia-410m.Q3_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-410m.Q3_K_M.gguf -n 128
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]} | joeshmoethefunnyone/pythia-410m-Q3_K_M-GGUF | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:20:40+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nash_dpo_rank4_iter_3
This model is a fine-tuned version of [YYYYYYibo/nash_dpo_rank4_iter_2](https://huggingface.co/YYYYYYibo/nash_dpo_rank4_iter_2) on the updated and the original datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5471
- Rewards/chosen: -0.3098
- Rewards/rejected: -0.8270
- Rewards/accuracies: 0.7140
- Rewards/margins: 0.5173
- Logps/rejected: -380.2009
- Logps/chosen: -346.7321
- Logits/rejected: -0.5385
- Logits/chosen: -0.9596
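The rewards and margins above follow the DPO formulation, in which each reward is β times the policy-vs-reference log-probability gap on a response, and the loss is the negative log-sigmoid of the margin. A minimal sketch (β and the log-probabilities below are assumptions for illustration — the card does not report them):

```python
import math

BETA = 0.1  # hypothetical; DPO runs commonly use beta in the 0.01-0.1 range

def dpo_stats(pi_chosen, ref_chosen, pi_rejected, ref_rejected, beta=BETA):
    """Return (reward_chosen, reward_rejected, margin, loss) for one preference pair."""
    r_c = beta * (pi_chosen - ref_chosen)      # rewards/chosen
    r_r = beta * (pi_rejected - ref_rejected)  # rewards/rejected
    margin = r_c - r_r                         # rewards/margins
    loss = -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log(sigmoid(margin))
    return r_c, r_r, margin, loss

# With illustrative log-probabilities, a positive margin pushes the loss
# below log(2) ~= 0.693 (the value at zero margin):
# dpo_stats(-100.0, -99.0, -120.0, -115.0) -> margin = 0.4, loss ~ 0.51
```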
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.5347 | 0.49 | 100 | 0.5570 | -0.3268 | -0.8259 | 0.7120 | 0.4991 | -380.0872 | -348.4370 | -0.5705 | -0.9798 |
| 0.5154 | 0.98 | 200 | 0.5471 | -0.3098 | -0.8270 | 0.7140 | 0.5173 | -380.2009 | -346.7321 | -0.5385 | -0.9596 |
### Framework versions
- PEFT 0.7.1
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.15.2 | {"license": "apache-2.0", "library_name": "peft", "tags": ["alignment-handbook", "generated_from_trainer", "trl", "dpo"], "datasets": ["updated", "original"], "base_model": "alignment-handbook/zephyr-7b-sft-full", "model-index": [{"name": "nash_dpo_rank4_iter_3", "results": []}]} | YYYYYYibo/nash_dpo_rank4_iter_3 | null | [
"peft",
"safetensors",
"mistral",
"alignment-handbook",
"generated_from_trainer",
"trl",
"dpo",
"dataset:updated",
"dataset:original",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:22:31+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_InstructionN2_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN2_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_InstructionN2_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:23:15+00:00 |
null | null | {"license": "openrail"} | Adnan1361/monib | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T01:23:48+00:00 |
|
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | abc88767/model36 | null | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:28:02+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# your_model_checkpoint-finetuned-your_task
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8249
- Matthews Correlation: 0.5196
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4889 | 1.0 | 1069 | 0.4690 | 0.4181 |
| 0.3321 | 2.0 | 2138 | 0.5897 | 0.5184 |
| 0.2233 | 3.0 | 3207 | 0.8249 | 0.5196 |
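Matthews correlation, the metric reported above, balances all four cells of the binary confusion matrix and ranges from -1 to 1; a sketch of how it is computed (the counts below are illustrative, not from this evaluation):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC from binary confusion-matrix counts; defined as 0 when a marginal is empty."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A strongly diagonal confusion matrix scores well:
print(matthews_corrcoef(90, 85, 10, 15))  # ~0.75
```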
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["matthews_correlation"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "your_model_checkpoint-finetuned-your_task", "results": []}]} | hanzla107/your_model_checkpoint-finetuned-your_task | null | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:28:45+00:00 |
feature-extraction | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | andersonbcdefg/tiny-emb-2024-05-02_01-29-42 | null | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:29:42+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-2 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:29:42+00:00 |
null | null | {} | Xrunner/hive-q | null | [
"region:us"
] | null | 2024-05-02T01:30:32+00:00 |
|
text-generation | transformers | {"license": "mit"} | coming-san-fran/test-gpt-2 | null | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:31:35+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-3 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:32:44+00:00 |
null | null | {} | brittlewis12/Hermes-2-Pro-Llama-3-8B-GGUF | null | [
"gguf",
"region:us"
] | null | 2024-05-02T01:33:45+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4
This model is a fine-tuned version of [EleutherAI/pythia-160m](https://huggingface.co/EleutherAI/pythia-160m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.1
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "EleutherAI/pythia-160m", "model-index": [{"name": "robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4", "results": []}]} | AlignmentResearch/robust_llm_pythia-160m_mz-133_EnronSpam_n-its-10-seed-4 | null | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-160m",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:34:21+00:00 |
text-generation | null |
# bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3-mini-128k-instruct`](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF --model phi-3-mini-128k-instruct.Q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m phi-3-mini-128k-instruct.Q4_0.gguf -n 128
```
| {"language": ["en"], "license": "mit", "tags": ["nlp", "code", "llama-cpp", "gguf-my-repo"], "license_link": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE", "pipeline_tag": "text-generation", "widget": [{"messages": [{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}]}]} | bkz11/Phi-3-mini-128k-instruct-Q4_0-GGUF | null | [
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"license:mit",
"region:us"
] | null | 2024-05-02T01:36:21+00:00 |
text-classification | setfit |
# SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of filtering aspect span candidates.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
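The contrastive step trains on sentence pairs: pairs sharing a label are pushed together in embedding space, pairs with different labels are pushed apart. A rough sketch of the pair generation (an illustration of the idea, not SetFit's actual implementation):

```python
from itertools import combinations

def contrastive_pairs(texts, labels):
    """Assign each sentence pair a target of 1.0 if the two sentences
    share a class, else 0.0 (the similarity target used in fine-tuning)."""
    pairs = []
    for i, j in combinations(range(len(texts)), 2):
        target = 1.0 if labels[i] == labels[j] else 0.0
        pairs.append((texts[i], texts[j], target))
    return pairs

pairs = contrastive_pairs(
    ["great battery", "screen is sharp", "awful hinge"],
    ["positive", "positive", "negative"],
)
print(len(pairs))  # 3 pairs from 3 sentences
```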
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. **Use this SetFit model to filter these possible aspect span candidates.**
3. Use a SetFit model to classify the filtered aspect span candidates.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2](https://huggingface.co/marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2)
- **SetFitABSA Polarity Model:** [setfit-absa-polarity](https://huggingface.co/setfit-absa-polarity)
- **Maximum Sequence Length:** 256 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:----------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| aspect | <ul><li>'cord:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'battery life:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'service center:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
| no aspect | <ul><li>'night:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'skip:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'exchange:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8948 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"marcelomoreno26/all-MiniLM-L6-v2-absa-aspect2",
"setfit-absa-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 2 | 21.9670 | 75 |
| Label | Training Sample Count |
|:----------|:----------------------|
| no aspect | 690 |
| aspect | 644 |
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
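The `CosineSimilarityLoss` above drives each pair's embedding similarity toward its target (1 for matching labels, 0 otherwise). A minimal, framework-free sketch of what that loss computes:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def cosine_similarity_loss(a, b, target):
    """Squared error between the pair's cosine similarity and its target."""
    return (cosine_similarity(a, b) - target) ** 2

# identical embeddings with target 1.0 incur (effectively) zero loss
print(cosine_similarity_loss([1.0, 2.0], [1.0, 2.0], 1.0))
```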
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0000 | 1 | 0.3662 | - |
| 0.0015 | 50 | 0.3374 | - |
| 0.0029 | 100 | 0.3411 | - |
| 0.0044 | 150 | 0.2945 | - |
| 0.0059 | 200 | 0.2944 | - |
| 0.0073 | 250 | 0.2942 | - |
| 0.0088 | 300 | 0.2409 | - |
| 0.0103 | 350 | 0.2817 | - |
| 0.0118 | 400 | 0.3149 | - |
| 0.0132 | 450 | 0.2618 | - |
| 0.0147 | 500 | 0.247 | - |
| 0.0162 | 550 | 0.2883 | - |
| 0.0176 | 600 | 0.2783 | - |
| 0.0191 | 650 | 0.2418 | - |
| 0.0206 | 700 | 0.2938 | - |
| 0.0220 | 750 | 0.2376 | - |
| 0.0235 | 800 | 0.2652 | - |
| 0.0250 | 850 | 0.2442 | - |
| 0.0265 | 900 | 0.2678 | - |
| 0.0279 | 950 | 0.2216 | - |
| 0.0294 | 1000 | 0.1816 | - |
| 0.0309 | 1050 | 0.1102 | - |
| 0.0323 | 1100 | 0.2985 | - |
| 0.0338 | 1150 | 0.1124 | - |
| 0.0353 | 1200 | 0.1075 | - |
| 0.0367 | 1250 | 0.0819 | - |
| 0.0382 | 1300 | 0.1238 | - |
| 0.0397 | 1350 | 0.0529 | - |
| 0.0412 | 1400 | 0.026 | - |
| 0.0426 | 1450 | 0.0289 | - |
| 0.0441 | 1500 | 0.067 | - |
| 0.0456 | 1550 | 0.0276 | - |
| 0.0470 | 1600 | 0.0162 | - |
| 0.0485 | 1650 | 0.0083 | - |
| 0.0500 | 1700 | 0.0017 | - |
| 0.0514 | 1750 | 0.0028 | - |
| 0.0529 | 1800 | 0.0045 | - |
| 0.0544 | 1850 | 0.0022 | - |
| 0.0558 | 1900 | 0.0014 | - |
| 0.0573 | 1950 | 0.0059 | - |
| 0.0588 | 2000 | 0.0019 | - |
| 0.0603 | 2050 | 0.0014 | - |
| 0.0617 | 2100 | 0.0022 | - |
| 0.0632 | 2150 | 0.0005 | - |
| 0.0647 | 2200 | 0.0008 | - |
| 0.0661 | 2250 | 0.0005 | - |
| 0.0676 | 2300 | 0.0006 | - |
| 0.0691 | 2350 | 0.0003 | - |
| 0.0705 | 2400 | 0.0007 | - |
| 0.0720 | 2450 | 0.0005 | - |
| 0.0735 | 2500 | 0.0005 | - |
| 0.0750 | 2550 | 0.0612 | - |
| 0.0764 | 2600 | 0.0004 | - |
| 0.0779 | 2650 | 0.041 | - |
| 0.0794 | 2700 | 0.0002 | - |
| 0.0808 | 2750 | 0.0003 | - |
| 0.0823 | 2800 | 0.0002 | - |
| 0.0838 | 2850 | 0.0002 | - |
| 0.0852 | 2900 | 0.0002 | - |
| 0.0867 | 2950 | 0.0004 | - |
| 0.0882 | 3000 | 0.0006 | - |
| 0.0897 | 3050 | 0.0601 | - |
| 0.0911 | 3100 | 0.0002 | - |
| 0.0926 | 3150 | 0.0108 | - |
| 0.0941 | 3200 | 0.0003 | - |
| 0.0955 | 3250 | 0.0363 | - |
| 0.0970 | 3300 | 0.0006 | - |
| 0.0985 | 3350 | 0.0002 | - |
| 0.0999 | 3400 | 0.0033 | - |
| 0.1014 | 3450 | 0.0002 | - |
| 0.1029 | 3500 | 0.0002 | - |
| 0.1044 | 3550 | 0.0006 | - |
| 0.1058 | 3600 | 0.0002 | - |
| 0.1073 | 3650 | 0.0002 | - |
| 0.1088 | 3700 | 0.0001 | - |
| 0.1102 | 3750 | 0.0002 | - |
| 0.1117 | 3800 | 0.0002 | - |
| 0.1132 | 3850 | 0.0004 | - |
| 0.1146 | 3900 | 0.0003 | - |
| 0.1161 | 3950 | 0.0001 | - |
| 0.1176 | 4000 | 0.0004 | - |
| 0.1190 | 4050 | 0.0003 | - |
| 0.1205 | 4100 | 0.001 | - |
| 0.1220 | 4150 | 0.0002 | - |
| 0.1235 | 4200 | 0.0001 | - |
| 0.1249 | 4250 | 0.0003 | - |
| 0.1264 | 4300 | 0.0003 | - |
| 0.1279 | 4350 | 0.0002 | - |
| 0.1293 | 4400 | 0.0001 | - |
| 0.1308 | 4450 | 0.0001 | - |
| 0.1323 | 4500 | 0.0001 | - |
| 0.1337 | 4550 | 0.0001 | - |
| 0.1352 | 4600 | 0.0001 | - |
| 0.1367 | 4650 | 0.0003 | - |
| 0.1382 | 4700 | 0.0006 | - |
| 0.1396 | 4750 | 0.0003 | - |
| 0.1411 | 4800 | 0.0001 | - |
| 0.1426 | 4850 | 0.0011 | - |
| 0.1440 | 4900 | 0.0001 | - |
| 0.1455 | 4950 | 0.0001 | - |
| 0.1470 | 5000 | 0.0001 | - |
| 0.1484 | 5050 | 0.0001 | - |
| 0.1499 | 5100 | 0.0002 | - |
| 0.1514 | 5150 | 0.0497 | - |
| 0.1529 | 5200 | 0.0002 | - |
| 0.1543 | 5250 | 0.0001 | - |
| 0.1558 | 5300 | 0.0008 | - |
| 0.1573 | 5350 | 0.0001 | - |
| 0.1587 | 5400 | 0.0002 | - |
| 0.1602 | 5450 | 0.0001 | - |
| 0.1617 | 5500 | 0.0003 | - |
| 0.1631 | 5550 | 0.0003 | - |
| 0.1646 | 5600 | 0.0004 | - |
| 0.1661 | 5650 | 0.0002 | - |
| 0.1675 | 5700 | 0.0002 | - |
| 0.1690 | 5750 | 0.0001 | - |
| 0.1705 | 5800 | 0.0001 | - |
| 0.1720 | 5850 | 0.0001 | - |
| 0.1734 | 5900 | 0.0004 | - |
| 0.1749 | 5950 | 0.0001 | - |
| 0.1764 | 6000 | 0.0001 | - |
| 0.1778 | 6050 | 0.0001 | - |
| 0.125 | 1 | 0.0002 | - |
| 0.5 | 4 | 0.0003 | - |
| 1.0 | 8 | 0.0 | - |
| 0.0000 | 1 | 0.0001 | - |
| 0.0015 | 50 | 0.0001 | - |
| 0.0029 | 100 | 0.0 | - |
| 0.0044 | 150 | 0.0001 | - |
| 0.125 | 1 | 0.0 | - |
| 0.5 | 4 | 0.0 | - |
| 0.0000 | 1 | 0.0003 | - |
| 0.0009 | 50 | 0.0003 | - |
| 0.0018 | 100 | 0.0003 | - |
| 0.0027 | 150 | 0.0001 | - |
| 0.0036 | 200 | 0.0001 | - |
| 0.0045 | 250 | 0.1015 | - |
| 0.0054 | 300 | 0.0005 | - |
| 0.0063 | 350 | 0.0579 | - |
| 0.0072 | 400 | 0.0001 | - |
| 0.0081 | 450 | 0.0897 | - |
| 0.0090 | 500 | 0.0618 | - |
| 0.0099 | 550 | 0.0002 | - |
| 0.0108 | 600 | 0.0001 | - |
| 0.0117 | 650 | 0.0004 | - |
| 0.0126 | 700 | 0.0002 | - |
| 0.0135 | 750 | 0.0002 | - |
| 0.0143 | 800 | 0.0001 | - |
| 0.0152 | 850 | 0.062 | - |
| 0.0161 | 900 | 0.0004 | - |
| 0.0170 | 950 | 0.0002 | - |
| 0.0179 | 1000 | 0.0001 | - |
| 0.0188 | 1050 | 0.0628 | - |
| 0.0197 | 1100 | 0.0003 | - |
| 0.0206 | 1150 | 0.0003 | - |
| 0.0215 | 1200 | 0.0001 | - |
| 0.0224 | 1250 | 0.0001 | - |
| 0.0233 | 1300 | 0.0001 | - |
| 0.0000 | 1 | 0.0002 | - |
| 0.0009 | 50 | 0.0002 | - |
| 0.0018 | 100 | 0.0001 | - |
| 0.0027 | 150 | 0.0001 | - |
| 0.0036 | 200 | 0.0001 | - |
| 0.0045 | 250 | 0.0002 | - |
| 0.0054 | 300 | 0.0001 | - |
| 0.0063 | 350 | 0.0002 | - |
| 0.0072 | 400 | 0.0002 | - |
| 0.0081 | 450 | 0.0262 | - |
| 0.0090 | 500 | 0.0001 | - |
| 0.0099 | 550 | 0.0002 | - |
| 0.0108 | 600 | 0.0001 | - |
| 0.0117 | 650 | 0.0001 | - |
| 0.0126 | 700 | 0.0001 | - |
| 0.0135 | 750 | 0.0001 | - |
| 0.0143 | 800 | 0.0001 | - |
| 0.0152 | 850 | 0.0002 | - |
| 0.0161 | 900 | 0.0001 | - |
| 0.0170 | 950 | 0.0001 | - |
| 0.0179 | 1000 | 0.0001 | - |
| 0.0188 | 1050 | 0.06 | - |
| 0.0197 | 1100 | 0.0001 | - |
| 0.0206 | 1150 | 0.0001 | - |
| 0.0215 | 1200 | 0.0001 | - |
| 0.0224 | 1250 | 0.0001 | - |
| 0.0233 | 1300 | 0.0001 | - |
| 0.0242 | 1350 | 0.0001 | - |
| 0.0251 | 1400 | 0.0001 | - |
| 0.0260 | 1450 | 0.0001 | - |
| 0.0269 | 1500 | 0.0002 | - |
| 0.0278 | 1550 | 0.0001 | - |
| 0.0287 | 1600 | 0.0001 | - |
| 0.0296 | 1650 | 0.0125 | - |
| 0.0305 | 1700 | 0.0001 | - |
| 0.0314 | 1750 | 0.0001 | - |
| 0.0323 | 1800 | 0.0001 | - |
| 0.0332 | 1850 | 0.0001 | - |
| 0.0341 | 1900 | 0.0001 | - |
| 0.0350 | 1950 | 0.0001 | - |
| 0.0359 | 2000 | 0.0001 | - |
| 0.0368 | 2050 | 0.0001 | - |
| 0.0377 | 2100 | 0.0002 | - |
| 0.0386 | 2150 | 0.0001 | - |
| 0.0395 | 2200 | 0.0001 | - |
| 0.0404 | 2250 | 0.0407 | - |
| 0.0412 | 2300 | 0.0001 | - |
| 0.0421 | 2350 | 0.0001 | - |
| 0.0430 | 2400 | 0.0001 | - |
| 0.0439 | 2450 | 0.0001 | - |
| 0.0448 | 2500 | 0.0001 | - |
| 0.0457 | 2550 | 0.0 | - |
| 0.0466 | 2600 | 0.0 | - |
| 0.0475 | 2650 | 0.0001 | - |
| 0.0484 | 2700 | 0.0 | - |
| 0.0493 | 2750 | 0.0001 | - |
| 0.0502 | 2800 | 0.0001 | - |
| 0.0511 | 2850 | 0.0001 | - |
| 0.0520 | 2900 | 0.0001 | - |
| 0.0529 | 2950 | 0.0002 | - |
| 0.0538 | 3000 | 0.0001 | - |
| 0.0547 | 3050 | 0.0001 | - |
| 0.0556 | 3100 | 0.0001 | - |
| 0.0565 | 3150 | 0.0001 | - |
| 0.0574 | 3200 | 0.0 | - |
| 0.0583 | 3250 | 0.0 | - |
| 0.0592 | 3300 | 0.0 | - |
| 0.0601 | 3350 | 0.0001 | - |
| 0.0610 | 3400 | 0.0 | - |
| 0.0619 | 3450 | 0.0 | - |
| 0.0628 | 3500 | 0.0001 | - |
| 0.0637 | 3550 | 0.0001 | - |
| 0.0646 | 3600 | 0.0 | - |
| 0.0655 | 3650 | 0.0001 | - |
| 0.0664 | 3700 | 0.0 | - |
| 0.0673 | 3750 | 0.0001 | - |
| 0.0681 | 3800 | 0.0 | - |
| 0.0690 | 3850 | 0.0005 | - |
| 0.0699 | 3900 | 0.0001 | - |
| 0.0708 | 3950 | 0.0001 | - |
| 0.0717 | 4000 | 0.0 | - |
| 0.0726 | 4050 | 0.0001 | - |
| 0.0735 | 4100 | 0.0009 | - |
| 0.0744 | 4150 | 0.0001 | - |
| 0.0753 | 4200 | 0.0001 | - |
| 0.0762 | 4250 | 0.0001 | - |
| 0.0771 | 4300 | 0.0 | - |
| 0.0780 | 4350 | 0.0001 | - |
| 0.0789 | 4400 | 0.0001 | - |
| 0.0798 | 4450 | 0.0001 | - |
| 0.0807 | 4500 | 0.0 | - |
| 0.0816 | 4550 | 0.0 | - |
| 0.0825 | 4600 | 0.0001 | - |
| 0.0834 | 4650 | 0.0 | - |
| 0.0843 | 4700 | 0.0 | - |
| 0.0852 | 4750 | 0.0 | - |
| 0.0861 | 4800 | 0.0 | - |
| 0.0870 | 4850 | 0.0 | - |
| 0.0879 | 4900 | 0.0004 | - |
| 0.0888 | 4950 | 0.0002 | - |
| 0.0897 | 5000 | 0.0001 | - |
| 0.0906 | 5050 | 0.0001 | - |
| 0.0915 | 5100 | 0.0 | - |
| 0.0924 | 5150 | 0.0026 | - |
| 0.0933 | 5200 | 0.0549 | - |
| 0.0942 | 5250 | 0.0001 | - |
| 0.0950 | 5300 | 0.0011 | - |
| 0.0959 | 5350 | 0.0 | - |
| 0.0968 | 5400 | 0.0 | - |
| 0.0977 | 5450 | 0.0 | - |
| 0.0986 | 5500 | 0.0002 | - |
| 0.0995 | 5550 | 0.0001 | - |
| 0.1004 | 5600 | 0.0 | - |
| 0.1013 | 5650 | 0.0001 | - |
| 0.1022 | 5700 | 0.0001 | - |
| 0.1031 | 5750 | 0.0 | - |
| 0.1040 | 5800 | 0.0 | - |
| 0.1049 | 5850 | 0.0 | - |
| 0.1058 | 5900 | 0.0203 | - |
| 0.1067 | 5950 | 0.0001 | - |
| 0.1076 | 6000 | 0.0 | - |
| 0.1085 | 6050 | 0.0 | - |
| 0.1094 | 6100 | 0.0 | - |
| 0.1103 | 6150 | 0.0 | - |
| 0.1112 | 6200 | 0.0001 | - |
| 0.1121 | 6250 | 0.0 | - |
| 0.1130 | 6300 | 0.0 | - |
| 0.1139 | 6350 | 0.0 | - |
| 0.1148 | 6400 | 0.0 | - |
| 0.1157 | 6450 | 0.0164 | - |
| 0.1166 | 6500 | 0.0001 | - |
| 0.1175 | 6550 | 0.0 | - |
| 0.1184 | 6600 | 0.0001 | - |
| 0.1193 | 6650 | 0.0002 | - |
| 0.1202 | 6700 | 0.0001 | - |
| 0.1211 | 6750 | 0.0 | - |
| 0.1219 | 6800 | 0.0 | - |
| 0.1228 | 6850 | 0.0 | - |
| 0.1237 | 6900 | 0.0 | - |
| 0.1246 | 6950 | 0.0 | - |
| 0.1255 | 7000 | 0.0001 | - |
| 0.1264 | 7050 | 0.0 | - |
| 0.1273 | 7100 | 0.0 | - |
| 0.1282 | 7150 | 0.0 | - |
| 0.1291 | 7200 | 0.0002 | - |
| 0.1300 | 7250 | 0.0 | - |
| 0.1309 | 7300 | 0.0 | - |
| 0.1318 | 7350 | 0.0 | - |
| 0.1327 | 7400 | 0.0 | - |
| 0.1336 | 7450 | 0.0 | - |
| 0.1345 | 7500 | 0.0002 | - |
| 0.1354 | 7550 | 0.0 | - |
| 0.1363 | 7600 | 0.0 | - |
| 0.1372 | 7650 | 0.0001 | - |
| 0.1381 | 7700 | 0.0001 | - |
| 0.1390 | 7750 | 0.0001 | - |
| 0.1399 | 7800 | 0.0001 | - |
| 0.1408 | 7850 | 0.0 | - |
| 0.1417 | 7900 | 0.0 | - |
| 0.1426 | 7950 | 0.0 | - |
| 0.1435 | 8000 | 0.0142 | - |
| 0.1444 | 8050 | 0.0001 | - |
| 0.1453 | 8100 | 0.0 | - |
| 0.1462 | 8150 | 0.0002 | - |
| 0.1471 | 8200 | 0.0 | - |
| 0.1480 | 8250 | 0.0 | - |
| 0.1488 | 8300 | 0.0 | - |
| 0.1497 | 8350 | 0.0 | - |
| 0.1506 | 8400 | 0.0003 | - |
| 0.1515 | 8450 | 0.0 | - |
| 0.1524 | 8500 | 0.0 | - |
| 0.1533 | 8550 | 0.0 | - |
| 0.1542 | 8600 | 0.0 | - |
| 0.1551 | 8650 | 0.0 | - |
| 0.1560 | 8700 | 0.0 | - |
| 0.1569 | 8750 | 0.0 | - |
| 0.1578 | 8800 | 0.0 | - |
| 0.1587 | 8850 | 0.0 | - |
| 0.1596 | 8900 | 0.0 | - |
| 0.1605 | 8950 | 0.0 | - |
| 0.1614 | 9000 | 0.0 | - |
| 0.1623 | 9050 | 0.0 | - |
| 0.1632 | 9100 | 0.0 | - |
| 0.1641 | 9150 | 0.0 | - |
| 0.1650 | 9200 | 0.0 | - |
| 0.1659 | 9250 | 0.0001 | - |
| 0.1668 | 9300 | 0.0 | - |
| 0.1677 | 9350 | 0.0 | - |
| 0.1686 | 9400 | 0.0 | - |
| 0.1695 | 9450 | 0.0 | - |
| 0.1704 | 9500 | 0.0 | - |
| 0.1713 | 9550 | 0.0 | - |
| 0.1722 | 9600 | 0.0 | - |
| 0.1731 | 9650 | 0.0 | - |
| 0.1740 | 9700 | 0.0 | - |
| 0.1749 | 9750 | 0.0 | - |
| 0.1758 | 9800 | 0.0 | - |
| 0.1766 | 9850 | 0.0 | - |
| 0.1775 | 9900 | 0.0 | - |
| 0.1784 | 9950 | 0.0 | - |
| 0.1793 | 10000 | 0.0 | - |
| 0.1802 | 10050 | 0.0097 | - |
| 0.1811 | 10100 | 0.0 | - |
| 0.1820 | 10150 | 0.0 | - |
| 0.1829 | 10200 | 0.0 | - |
| 0.1838 | 10250 | 0.0 | - |
| 0.1847 | 10300 | 0.0001 | - |
| 0.1856 | 10350 | 0.0 | - |
| 0.1865 | 10400 | 0.0 | - |
| 0.1874 | 10450 | 0.0 | - |
| 0.1883 | 10500 | 0.0 | - |
| 0.1892 | 10550 | 0.0 | - |
| 0.1901 | 10600 | 0.0 | - |
| 0.1910 | 10650 | 0.0 | - |
| 0.1919 | 10700 | 0.0 | - |
| 0.1928 | 10750 | 0.0 | - |
| 0.1937 | 10800 | 0.0 | - |
| 0.1946 | 10850 | 0.0 | - |
| 0.1955 | 10900 | 0.0 | - |
| 0.1964 | 10950 | 0.0 | - |
| 0.1973 | 11000 | 0.0001 | - |
| 0.1982 | 11050 | 0.0 | - |
| 0.1991 | 11100 | 0.0 | - |
| 0.2000 | 11150 | 0.0 | - |
| 0.2009 | 11200 | 0.0 | - |
| 0.2018 | 11250 | 0.0004 | - |
| 0.2027 | 11300 | 0.0001 | - |
| 0.2035 | 11350 | 0.0001 | - |
| 0.2044 | 11400 | 0.0 | - |
| 0.2053 | 11450 | 0.0001 | - |
| 0.2062 | 11500 | 0.0 | - |
| 0.2071 | 11550 | 0.0001 | - |
| 0.2080 | 11600 | 0.0 | - |
| 0.2089 | 11650 | 0.0 | - |
| 0.2098 | 11700 | 0.0 | - |
| 0.2107 | 11750 | 0.0 | - |
| 0.2116 | 11800 | 0.0 | - |
| 0.2125 | 11850 | 0.0 | - |
| 0.2134 | 11900 | 0.0 | - |
| 0.2143 | 11950 | 0.0001 | - |
| 0.2152 | 12000 | 0.0 | - |
| 0.2161 | 12050 | 0.0 | - |
| 0.2170 | 12100 | 0.0 | - |
| 0.2179 | 12150 | 0.0 | - |
| 0.2188 | 12200 | 0.0 | - |
| 0.2197 | 12250 | 0.0 | - |
| 0.2206 | 12300 | 0.0 | - |
| 0.2215 | 12350 | 0.0 | - |
| 0.2224 | 12400 | 0.0 | - |
| 0.2233 | 12450 | 0.0 | - |
| 0.2242 | 12500 | 0.0 | - |
| 0.2251 | 12550 | 0.0 | - |
| 0.2260 | 12600 | 0.0 | - |
| 0.2269 | 12650 | 0.0 | - |
| 0.2278 | 12700 | 0.0 | - |
| 0.2287 | 12750 | 0.0 | - |
| 0.2296 | 12800 | 0.0 | - |
| 0.2304 | 12850 | 0.0 | - |
| 0.2313 | 12900 | 0.0 | - |
| 0.2322 | 12950 | 0.0 | - |
| 0.2331 | 13000 | 0.0 | - |
| 0.2340 | 13050 | 0.0 | - |
| 0.2349 | 13100 | 0.0 | - |
| 0.2358 | 13150 | 0.0264 | - |
| 0.2367 | 13200 | 0.0 | - |
| 0.2376 | 13250 | 0.0 | - |
| 0.2385 | 13300 | 0.0 | - |
| 0.2394 | 13350 | 0.0 | - |
| 0.2403 | 13400 | 0.0 | - |
| 0.2412 | 13450 | 0.0 | - |
| 0.2421 | 13500 | 0.0 | - |
| 0.2430 | 13550 | 0.0 | - |
| 0.2439 | 13600 | 0.0 | - |
| 0.2448 | 13650 | 0.0 | - |
| 0.2457 | 13700 | 0.0 | - |
| 0.2466 | 13750 | 0.0 | - |
| 0.2475 | 13800 | 0.0 | - |
| 0.2484 | 13850 | 0.0 | - |
| 0.2493 | 13900 | 0.0 | - |
| 0.2502 | 13950 | 0.0 | - |
| 0.2511 | 14000 | 0.0 | - |
| 0.2520 | 14050 | 0.0 | - |
| 0.2529 | 14100 | 0.0 | - |
| 0.2538 | 14150 | 0.0001 | - |
| 0.2547 | 14200 | 0.0 | - |
| 0.2556 | 14250 | 0.0 | - |
| 0.2565 | 14300 | 0.0 | - |
| 0.2573 | 14350 | 0.0 | - |
| 0.2582 | 14400 | 0.0 | - |
| 0.2591 | 14450 | 0.0 | - |
| 0.2600 | 14500 | 0.0 | - |
| 0.2609 | 14550 | 0.0001 | - |
| 0.2618 | 14600 | 0.0 | - |
| 0.2627 | 14650 | 0.0 | - |
| 0.2636 | 14700 | 0.0 | - |
| 0.2645 | 14750 | 0.0 | - |
| 0.2654 | 14800 | 0.0 | - |
| 0.2663 | 14850 | 0.0 | - |
| 0.2672 | 14900 | 0.0 | - |
| 0.2681 | 14950 | 0.0001 | - |
| 0.2690 | 15000 | 0.0 | - |
| 0.2699 | 15050 | 0.0 | - |
| 0.2708 | 15100 | 0.0 | - |
| 0.2717 | 15150 | 0.0 | - |
| 0.2726 | 15200 | 0.0 | - |
| 0.2735 | 15250 | 0.0 | - |
| 0.2744 | 15300 | 0.0 | - |
| 0.2753 | 15350 | 0.0 | - |
| 0.2762 | 15400 | 0.0 | - |
| 0.2771 | 15450 | 0.0 | - |
| 0.2780 | 15500 | 0.0001 | - |
| 0.2789 | 15550 | 0.0621 | - |
| 0.2798 | 15600 | 0.0056 | - |
| 0.2807 | 15650 | 0.0 | - |
| 0.2816 | 15700 | 0.0 | - |
| 0.2825 | 15750 | 0.0145 | - |
| 0.2834 | 15800 | 0.0 | - |
| 0.2842 | 15850 | 0.0 | - |
| 0.2851 | 15900 | 0.0166 | - |
| 0.2860 | 15950 | 0.0 | - |
| 0.2869 | 16000 | 0.0 | - |
| 0.2878 | 16050 | 0.0 | - |
| 0.2887 | 16100 | 0.0166 | - |
| 0.2896 | 16150 | 0.0 | - |
| 0.2905 | 16200 | 0.0 | - |
| 0.2914 | 16250 | 0.0169 | - |
| 0.2923 | 16300 | 0.0 | - |
| 0.2932 | 16350 | 0.0 | - |
| 0.2941 | 16400 | 0.0 | - |
| 0.2950 | 16450 | 0.0 | - |
| 0.2959 | 16500 | 0.0 | - |
| 0.2968 | 16550 | 0.0 | - |
| 0.2977 | 16600 | 0.0 | - |
| 0.2986 | 16650 | 0.0 | - |
| 0.2995 | 16700 | 0.0 | - |
| 0.3004 | 16750 | 0.0 | - |
| 0.3013 | 16800 | 0.0 | - |
| 0.3022 | 16850 | 0.0 | - |
| 0.3031 | 16900 | 0.0 | - |
| 0.3040 | 16950 | 0.0 | - |
| 0.3049 | 17000 | 0.0 | - |
| 0.3058 | 17050 | 0.0138 | - |
| 0.3067 | 17100 | 0.0 | - |
| 0.3076 | 17150 | 0.0 | - |
| 0.3085 | 17200 | 0.0 | - |
| 0.3094 | 17250 | 0.0 | - |
| 0.3103 | 17300 | 0.0 | - |
| 0.3111 | 17350 | 0.0 | - |
| 0.3120 | 17400 | 0.0 | - |
| 0.3129 | 17450 | 0.0001 | - |
| 0.3138 | 17500 | 0.0 | - |
| 0.3147 | 17550 | 0.0 | - |
| 0.3156 | 17600 | 0.0 | - |
| 0.3165 | 17650 | 0.0 | - |
| 0.3174 | 17700 | 0.0 | - |
| 0.3183 | 17750 | 0.0 | - |
| 0.3192 | 17800 | 0.0 | - |
| 0.3201 | 17850 | 0.0 | - |
| 0.3210 | 17900 | 0.0 | - |
| 0.3219 | 17950 | 0.0001 | - |
| 0.3228 | 18000 | 0.0 | - |
| 0.3237 | 18050 | 0.0 | - |
| 0.3246 | 18100 | 0.0 | - |
| 0.3255 | 18150 | 0.0 | - |
| 0.3264 | 18200 | 0.0 | - |
| 0.3273 | 18250 | 0.0 | - |
| 0.3282 | 18300 | 0.0 | - |
| 0.3291 | 18350 | 0.0 | - |
| 0.3300 | 18400 | 0.0 | - |
| 0.3309 | 18450 | 0.0003 | - |
| 0.3318 | 18500 | 0.0 | - |
| 0.3327 | 18550 | 0.0 | - |
| 0.3336 | 18600 | 0.0 | - |
| 0.3345 | 18650 | 0.0 | - |
| 0.3354 | 18700 | 0.0 | - |
| 0.3363 | 18750 | 0.0 | - |
| 0.3372 | 18800 | 0.0 | - |
| 0.3380 | 18850 | 0.0 | - |
| 0.3389 | 18900 | 0.0 | - |
| 0.3398 | 18950 | 0.0 | - |
| 0.3407 | 19000 | 0.0 | - |
| 0.3416 | 19050 | 0.0 | - |
| 0.3425 | 19100 | 0.0 | - |
| 0.3434 | 19150 | 0.0 | - |
| 0.3443 | 19200 | 0.0 | - |
| 0.3452 | 19250 | 0.0 | - |
| 0.3461 | 19300 | 0.0 | - |
| 0.3470 | 19350 | 0.0 | - |
| 0.3479 | 19400 | 0.0 | - |
| 0.3488 | 19450 | 0.0 | - |
| 0.3497 | 19500 | 0.0001 | - |
| 0.3506 | 19550 | 0.0131 | - |
| 0.3515 | 19600 | 0.0 | - |
| 0.3524 | 19650 | 0.0 | - |
| 0.3533 | 19700 | 0.0 | - |
| 0.3542 | 19750 | 0.0 | - |
| 0.3551 | 19800 | 0.0 | - |
| 0.3560 | 19850 | 0.0 | - |
| 0.3569 | 19900 | 0.0 | - |
| 0.3578 | 19950 | 0.0 | - |
| 0.3587 | 20000 | 0.0 | - |
| 0.3596 | 20050 | 0.0 | - |
| 0.3605 | 20100 | 0.0 | - |
| 0.3614 | 20150 | 0.0 | - |
| 0.3623 | 20200 | 0.0208 | - |
| 0.3632 | 20250 | 0.0 | - |
| 0.3641 | 20300 | 0.0 | - |
| 0.3650 | 20350 | 0.0 | - |
| 0.3658 | 20400 | 0.0 | - |
| 0.3667 | 20450 | 0.0 | - |
| 0.3676 | 20500 | 0.0 | - |
| 0.3685 | 20550 | 0.0 | - |
| 0.3694 | 20600 | 0.0 | - |
| 0.3703 | 20650 | 0.0 | - |
| 0.3712 | 20700 | 0.0 | - |
| 0.3721 | 20750 | 0.0 | - |
| 0.3730 | 20800 | 0.0 | - |
| 0.3739 | 20850 | 0.0 | - |
| 0.3748 | 20900 | 0.0 | - |
| 0.3757 | 20950 | 0.0 | - |
| 0.3766 | 21000 | 0.0 | - |
| 0.3775 | 21050 | 0.0 | - |
| 0.3784 | 21100 | 0.0 | - |
| 0.3793 | 21150 | 0.0 | - |
| 0.3802 | 21200 | 0.0 | - |
| 0.3811 | 21250 | 0.0 | - |
| 0.3820 | 21300 | 0.0 | - |
| 0.3829 | 21350 | 0.0 | - |
| 0.3838 | 21400 | 0.0 | - |
| 0.3847 | 21450 | 0.0 | - |
| 0.3856 | 21500 | 0.0 | - |
| 0.3865 | 21550 | 0.0 | - |
| 0.3874 | 21600 | 0.0 | - |
| 0.3883 | 21650 | 0.0 | - |
| 0.3892 | 21700 | 0.0 | - |
| 0.3901 | 21750 | 0.0 | - |
| 0.3910 | 21800 | 0.0 | - |
| 0.3919 | 21850 | 0.0001 | - |
| 0.3927 | 21900 | 0.0 | - |
| 0.3936 | 21950 | 0.0 | - |
| 0.3945 | 22000 | 0.0 | - |
| 0.3954 | 22050 | 0.0 | - |
| 0.3963 | 22100 | 0.0 | - |
| 0.3972 | 22150 | 0.0 | - |
| 0.3981 | 22200 | 0.0 | - |
| 0.3990 | 22250 | 0.0 | - |
| 0.3999 | 22300 | 0.0 | - |
| 0.4008 | 22350 | 0.0 | - |
| 0.4017 | 22400 | 0.0 | - |
| 0.4026 | 22450 | 0.0 | - |
| 0.4035 | 22500 | 0.0 | - |
| 0.4044 | 22550 | 0.0 | - |
| 0.4053 | 22600 | 0.0217 | - |
| 0.4062 | 22650 | 0.0 | - |
| 0.4071 | 22700 | 0.0 | - |
| 0.4080 | 22750 | 0.0 | - |
| 0.4089 | 22800 | 0.0 | - |
| 0.4098 | 22850 | 0.0 | - |
| 0.4107 | 22900 | 0.0 | - |
| 0.4116 | 22950 | 0.0 | - |
| 0.4125 | 23000 | 0.0 | - |
| 0.4134 | 23050 | 0.0 | - |
| 0.4143 | 23100 | 0.0 | - |
| 0.4152 | 23150 | 0.0 | - |
| 0.4161 | 23200 | 0.0 | - |
| 0.4170 | 23250 | 0.0 | - |
| 0.4179 | 23300 | 0.0 | - |
| 0.4188 | 23350 | 0.0 | - |
| 0.4196 | 23400 | 0.0 | - |
| 0.4205 | 23450 | 0.0 | - |
| 0.4214 | 23500 | 0.0 | - |
| 0.4223 | 23550 | 0.0 | - |
| 0.4232 | 23600 | 0.0 | - |
| 0.4241 | 23650 | 0.0 | - |
| 0.4250 | 23700 | 0.0 | - |
| 0.4259 | 23750 | 0.0 | - |
| 0.4268 | 23800 | 0.0 | - |
| 0.4277 | 23850 | 0.0 | - |
| 0.4286 | 23900 | 0.0098 | - |
| 0.4295 | 23950 | 0.0 | - |
| 0.4304 | 24000 | 0.0 | - |
| 0.4313 | 24050 | 0.0 | - |
| 0.4322 | 24100 | 0.0 | - |
| 0.4331 | 24150 | 0.0 | - |
| 0.4340 | 24200 | 0.0 | - |
| 0.4349 | 24250 | 0.0 | - |
| 0.4358 | 24300 | 0.0089 | - |
| 0.4367 | 24350 | 0.0 | - |
| 0.4376 | 24400 | 0.0 | - |
| 0.4385 | 24450 | 0.0 | - |
| 0.4394 | 24500 | 0.0 | - |
| 0.4403 | 24550 | 0.0 | - |
| 0.4412 | 24600 | 0.0092 | - |
| 0.4421 | 24650 | 0.0003 | - |
| 0.4430 | 24700 | 0.0283 | - |
| 0.4439 | 24750 | 0.0 | - |
| 0.4448 | 24800 | 0.0 | - |
| 0.4457 | 24850 | 0.0 | - |
| 0.4465 | 24900 | 0.0 | - |
| 0.4474 | 24950 | 0.0 | - |
| 0.4483 | 25000 | 0.0 | - |
| 0.4492 | 25050 | 0.0 | - |
| 0.4501 | 25100 | 0.0 | - |
| 0.4510 | 25150 | 0.0002 | - |
| 0.4519 | 25200 | 0.0016 | - |
| 0.4528 | 25250 | 0.0 | - |
| 0.4537 | 25300 | 0.0 | - |
| 0.4546 | 25350 | 0.0 | - |
| 0.4555 | 25400 | 0.0 | - |
| 0.4564 | 25450 | 0.0 | - |
| 0.4573 | 25500 | 0.0 | - |
| 0.4582 | 25550 | 0.0 | - |
| 0.4591 | 25600 | 0.0 | - |
| 0.4600 | 25650 | 0.0171 | - |
| 0.4609 | 25700 | 0.0 | - |
| 0.4618 | 25750 | 0.0 | - |
| 0.4627 | 25800 | 0.0161 | - |
| 0.4636 | 25850 | 0.0 | - |
| 0.4645 | 25900 | 0.0 | - |
| 0.4654 | 25950 | 0.0 | - |
| 0.4663 | 26000 | 0.0 | - |
| 0.4672 | 26050 | 0.0078 | - |
| 0.4681 | 26100 | 0.0 | - |
| 0.4690 | 26150 | 0.0 | - |
| 0.4699 | 26200 | 0.0 | - |
| 0.4708 | 26250 | 0.0 | - |
| 0.4717 | 26300 | 0.0 | - |
| 0.4726 | 26350 | 0.0 | - |
| 0.4734 | 26400 | 0.0 | - |
| 0.4743 | 26450 | 0.0 | - |
| 0.4752 | 26500 | 0.0091 | - |
| 0.4761 | 26550 | 0.0 | - |
| 0.4770 | 26600 | 0.0 | - |
| 0.4779 | 26650 | 0.0 | - |
| 0.4788 | 26700 | 0.0 | - |
| 0.4797 | 26750 | 0.0 | - |
| 0.4806 | 26800 | 0.0 | - |
| 0.4815 | 26850 | 0.0 | - |
| 0.4824 | 26900 | 0.0 | - |
| 0.4833 | 26950 | 0.0 | - |
| 0.4842 | 27000 | 0.0 | - |
| 0.4851 | 27050 | 0.0 | - |
| 0.4860 | 27100 | 0.0 | - |
| 0.4869 | 27150 | 0.0 | - |
| 0.4878 | 27200 | 0.0 | - |
| 0.4887 | 27250 | 0.0 | - |
| 0.4896 | 27300 | 0.0 | - |
| 0.4905 | 27350 | 0.0 | - |
| 0.4914 | 27400 | 0.0 | - |
| 0.4923 | 27450 | 0.0 | - |
| 0.4932 | 27500 | 0.0 | - |
| 0.4941 | 27550 | 0.0 | - |
| 0.4950 | 27600 | 0.0 | - |
| 0.4959 | 27650 | 0.0 | - |
| 0.4968 | 27700 | 0.0 | - |
| 0.4977 | 27750 | 0.0 | - |
| 0.4986 | 27800 | 0.0 | - |
| 0.4995 | 27850 | 0.0 | - |
| 0.5003 | 27900 | 0.0273 | - |
| 0.5012 | 27950 | 0.0 | - |
| 0.5021 | 28000 | 0.0 | - |
| 0.5030 | 28050 | 0.0 | - |
| 0.5039 | 28100 | 0.0 | - |
| 0.5048 | 28150 | 0.0 | - |
| 0.5057 | 28200 | 0.0 | - |
| 0.5066 | 28250 | 0.0 | - |
| 0.5075 | 28300 | 0.0 | - |
| 0.5084 | 28350 | 0.0 | - |
| 0.5093 | 28400 | 0.0 | - |
| 0.5102 | 28450 | 0.0 | - |
| 0.5111 | 28500 | 0.0 | - |
| 0.5120 | 28550 | 0.0 | - |
| 0.5129 | 28600 | 0.0 | - |
| 0.5138 | 28650 | 0.0 | - |
| 0.5147 | 28700 | 0.0 | - |
| 0.5156 | 28750 | 0.0 | - |
| 0.5165 | 28800 | 0.0 | - |
| 0.5174 | 28850 | 0.0 | - |
| 0.5183 | 28900 | 0.0 | - |
| 0.5192 | 28950 | 0.017 | - |
| 0.5201 | 29000 | 0.0 | - |
| 0.5210 | 29050 | 0.0 | - |
| 0.5219 | 29100 | 0.0 | - |
| 0.5228 | 29150 | 0.0 | - |
| 0.5237 | 29200 | 0.0 | - |
| 0.5246 | 29250 | 0.0 | - |
| 0.5255 | 29300 | 0.0 | - |
| 0.5264 | 29350 | 0.0 | - |
| 0.5273 | 29400 | 0.0 | - |
| 0.5281 | 29450 | 0.0 | - |
| 0.5290 | 29500 | 0.0211 | - |
| 0.5299 | 29550 | 0.0 | - |
| 0.5308 | 29600 | 0.0 | - |
| 0.5317 | 29650 | 0.0 | - |
| 0.5326 | 29700 | 0.0 | - |
| 0.5335 | 29750 | 0.0 | - |
| 0.5344 | 29800 | 0.0 | - |
| 0.5353 | 29850 | 0.0 | - |
| 0.5362 | 29900 | 0.0 | - |
| 0.5371 | 29950 | 0.0 | - |
| 0.5380 | 30000 | 0.0 | - |
| 0.5389 | 30050 | 0.0002 | - |
| 0.5398 | 30100 | 0.0 | - |
| 0.5407 | 30150 | 0.0 | - |
| 0.5416 | 30200 | 0.0 | - |
| 0.5425 | 30250 | 0.0 | - |
| 0.5434 | 30300 | 0.0 | - |
| 0.5443 | 30350 | 0.0 | - |
| 0.5452 | 30400 | 0.0 | - |
| 0.5461 | 30450 | 0.0 | - |
| 0.5470 | 30500 | 0.0158 | - |
| 0.5479 | 30550 | 0.0 | - |
| 0.5488 | 30600 | 0.0 | - |
| 0.5497 | 30650 | 0.0 | - |
| 0.5506 | 30700 | 0.0 | - |
| 0.5515 | 30750 | 0.0165 | - |
| 0.5524 | 30800 | 0.0 | - |
| 0.5533 | 30850 | 0.0 | - |
| 0.5542 | 30900 | 0.0 | - |
| 0.5550 | 30950 | 0.0 | - |
| 0.5559 | 31000 | 0.0 | - |
| 0.5568 | 31050 | 0.0 | - |
| 0.5577 | 31100 | 0.0 | - |
| 0.5586 | 31150 | 0.0132 | - |
| 0.5595 | 31200 | 0.0 | - |
| 0.5604 | 31250 | 0.0 | - |
| 0.5613 | 31300 | 0.0 | - |
| 0.5622 | 31350 | 0.0 | - |
| 0.5631 | 31400 | 0.0 | - |
| 0.5640 | 31450 | 0.0 | - |
| 0.5649 | 31500 | 0.0 | - |
| 0.5658 | 31550 | 0.0 | - |
| 0.5667 | 31600 | 0.0 | - |
| 0.5676 | 31650 | 0.0 | - |
| 0.5685 | 31700 | 0.0 | - |
| 0.5694 | 31750 | 0.0 | - |
| 0.5703 | 31800 | 0.0 | - |
| 0.5712 | 31850 | 0.0 | - |
| 0.5721 | 31900 | 0.0 | - |
| 0.5730 | 31950 | 0.0185 | - |
| 0.5739 | 32000 | 0.0 | - |
| 0.5748 | 32050 | 0.0 | - |
| 0.5757 | 32100 | 0.0 | - |
| 0.5766 | 32150 | 0.0 | - |
| 0.5775 | 32200 | 0.0 | - |
| 0.5784 | 32250 | 0.0 | - |
| 0.5793 | 32300 | 0.0 | - |
| 0.5802 | 32350 | 0.0 | - |
| 0.5811 | 32400 | 0.0 | - |
| 0.5819 | 32450 | 0.0 | - |
| 0.5828 | 32500 | 0.0 | - |
| 0.5837 | 32550 | 0.0 | - |
| 0.5846 | 32600 | 0.0 | - |
| 0.5855 | 32650 | 0.0 | - |
| 0.5864 | 32700 | 0.0 | - |
| 0.5873 | 32750 | 0.0 | - |
| 0.5882 | 32800 | 0.0 | - |
| 0.5891 | 32850 | 0.0 | - |
| 0.5900 | 32900 | 0.0 | - |
| 0.5909 | 32950 | 0.0 | - |
| 0.5918 | 33000 | 0.0 | - |
| 0.5927 | 33050 | 0.0 | - |
| 0.5936 | 33100 | 0.0 | - |
| 0.5945 | 33150 | 0.0 | - |
| 0.5954 | 33200 | 0.0 | - |
| 0.5963 | 33250 | 0.0 | - |
| 0.5972 | 33300 | 0.0 | - |
| 0.5981 | 33350 | 0.0 | - |
| 0.5990 | 33400 | 0.0 | - |
| 0.5999 | 33450 | 0.0 | - |
| 0.6008 | 33500 | 0.0 | - |
| 0.6017 | 33550 | 0.0 | - |
| 0.6026 | 33600 | 0.0 | - |
| 0.6035 | 33650 | 0.0 | - |
| 0.6044 | 33700 | 0.0 | - |
| 0.6053 | 33750 | 0.0 | - |
| 0.6062 | 33800 | 0.0 | - |
| 0.6071 | 33850 | 0.0 | - |
| 0.6080 | 33900 | 0.0 | - |
| 0.6088 | 33950 | 0.0 | - |
| 0.6097 | 34000 | 0.0 | - |
| 0.6106 | 34050 | 0.0 | - |
| 0.6115 | 34100 | 0.0 | - |
| 0.6124 | 34150 | 0.0 | - |
| 0.6133 | 34200 | 0.0 | - |
| 0.6142 | 34250 | 0.0 | - |
| 0.6151 | 34300 | 0.0 | - |
| 0.6160 | 34350 | 0.0 | - |
| 0.6169 | 34400 | 0.0 | - |
| 0.6178 | 34450 | 0.0 | - |
| 0.6187 | 34500 | 0.0 | - |
| 0.6196 | 34550 | 0.0 | - |
| 0.6205 | 34600 | 0.0 | - |
| 0.6214 | 34650 | 0.0 | - |
| 0.6223 | 34700 | 0.0 | - |
| 0.6232 | 34750 | 0.0 | - |
| 0.6241 | 34800 | 0.0 | - |
| 0.6250 | 34850 | 0.0 | - |
| 0.6259 | 34900 | 0.0174 | - |
| 0.6268 | 34950 | 0.0 | - |
| 0.6277 | 35000 | 0.0 | - |
| 0.6286 | 35050 | 0.0 | - |
| 0.6295 | 35100 | 0.0173 | - |
| 0.6304 | 35150 | 0.0 | - |
| 0.6313 | 35200 | 0.0 | - |
| 0.6322 | 35250 | 0.0 | - |
| 0.6331 | 35300 | 0.0 | - |
| 0.6340 | 35350 | 0.0 | - |
| 0.6349 | 35400 | 0.0 | - |
| 0.6357 | 35450 | 0.0 | - |
| 0.6366 | 35500 | 0.0 | - |
| 0.6375 | 35550 | 0.0 | - |
| 0.6384 | 35600 | 0.0 | - |
| 0.6393 | 35650 | 0.0 | - |
| 0.6402 | 35700 | 0.0 | - |
| 0.6411 | 35750 | 0.0 | - |
| 0.6420 | 35800 | 0.0 | - |
| 0.6429 | 35850 | 0.0 | - |
| 0.6438 | 35900 | 0.0 | - |
| 0.6447 | 35950 | 0.0 | - |
| 0.6456 | 36000 | 0.0 | - |
| 0.6465 | 36050 | 0.0 | - |
| 0.6474 | 36100 | 0.0 | - |
| 0.6483 | 36150 | 0.0 | - |
| 0.6492 | 36200 | 0.0 | - |
| 0.6501 | 36250 | 0.0 | - |
| 0.6510 | 36300 | 0.0115 | - |
| 0.6519 | 36350 | 0.0 | - |
| 0.6528 | 36400 | 0.0 | - |
| 0.6537 | 36450 | 0.0 | - |
| 0.6546 | 36500 | 0.0 | - |
| 0.6555 | 36550 | 0.0 | - |
| 0.6564 | 36600 | 0.0204 | - |
| 0.6573 | 36650 | 0.0 | - |
| 0.6582 | 36700 | 0.0125 | - |
| 0.6591 | 36750 | 0.0 | - |
| 0.6600 | 36800 | 0.0 | - |
| 0.6609 | 36850 | 0.0 | - |
| 0.6618 | 36900 | 0.0 | - |
| 0.6626 | 36950 | 0.0 | - |
| 0.6635 | 37000 | 0.0 | - |
| 0.6644 | 37050 | 0.0 | - |
| 0.6653 | 37100 | 0.0109 | - |
| 0.6662 | 37150 | 0.0 | - |
| 0.6671 | 37200 | 0.0 | - |
| 0.6680 | 37250 | 0.0 | - |
| 0.6689 | 37300 | 0.0 | - |
| 0.6698 | 37350 | 0.0 | - |
| 0.6707 | 37400 | 0.0 | - |
| 0.6716 | 37450 | 0.0 | - |
| 0.6725 | 37500 | 0.0 | - |
| 0.6734 | 37550 | 0.0 | - |
| 0.6743 | 37600 | 0.0 | - |
| 0.6752 | 37650 | 0.0 | - |
| 0.6761 | 37700 | 0.0 | - |
| 0.6770 | 37750 | 0.0 | - |
| 0.6779 | 37800 | 0.0 | - |
| 0.6788 | 37850 | 0.0 | - |
| 0.6797 | 37900 | 0.0 | - |
| 0.6806 | 37950 | 0.0 | - |
| 0.6815 | 38000 | 0.0 | - |
| 0.6824 | 38050 | 0.0 | - |
| 0.6833 | 38100 | 0.0 | - |
| 0.6842 | 38150 | 0.0 | - |
| 0.6851 | 38200 | 0.0 | - |
| 0.6860 | 38250 | 0.0 | - |
| 0.6869 | 38300 | 0.0 | - |
| 0.6878 | 38350 | 0.0 | - |
| 0.6887 | 38400 | 0.0 | - |
| 0.6896 | 38450 | 0.0 | - |
| 0.6904 | 38500 | 0.0 | - |
| 0.6913 | 38550 | 0.0 | - |
| 0.6922 | 38600 | 0.0 | - |
| 0.6931 | 38650 | 0.0 | - |
| 0.6940 | 38700 | 0.0 | - |
| 0.6949 | 38750 | 0.0 | - |
| 0.6958 | 38800 | 0.0 | - |
| 0.6967 | 38850 | 0.0 | - |
| 0.6976 | 38900 | 0.0 | - |
| 0.6985 | 38950 | 0.0 | - |
| 0.6994 | 39000 | 0.0 | - |
| 0.7003 | 39050 | 0.0 | - |
| 0.7012 | 39100 | 0.0 | - |
| 0.7021 | 39150 | 0.0 | - |
| 0.7030 | 39200 | 0.0 | - |
| 0.7039 | 39250 | 0.0 | - |
| 0.7048 | 39300 | 0.0 | - |
| 0.7057 | 39350 | 0.0 | - |
| 0.7066 | 39400 | 0.0 | - |
| 0.7075 | 39450 | 0.0 | - |
| 0.7084 | 39500 | 0.0 | - |
| 0.7093 | 39550 | 0.0 | - |
| 0.7102 | 39600 | 0.0 | - |
| 0.7111 | 39650 | 0.0 | - |
| 0.7120 | 39700 | 0.0 | - |
| 0.7129 | 39750 | 0.0 | - |
| 0.7138 | 39800 | 0.0 | - |
| 0.7147 | 39850 | 0.0 | - |
| 0.7156 | 39900 | 0.0 | - |
| 0.7165 | 39950 | 0.0 | - |
| 0.7173 | 40000 | 0.0 | - |
| 0.7182 | 40050 | 0.0 | - |
| 0.7191 | 40100 | 0.0 | - |
| 0.7200 | 40150 | 0.0 | - |
| 0.7209 | 40200 | 0.0 | - |
| 0.7218 | 40250 | 0.0 | - |
| 0.7227 | 40300 | 0.0 | - |
| 0.7236 | 40350 | 0.0 | - |
| 0.7245 | 40400 | 0.0 | - |
| 0.7254 | 40450 | 0.0 | - |
| 0.7263 | 40500 | 0.0 | - |
| 0.7272 | 40550 | 0.0 | - |
| 0.7281 | 40600 | 0.0 | - |
| 0.7290 | 40650 | 0.0 | - |
| 0.7299 | 40700 | 0.0 | - |
| 0.7308 | 40750 | 0.0 | - |
| 0.7317 | 40800 | 0.0 | - |
| 0.7326 | 40850 | 0.0 | - |
| 0.7335 | 40900 | 0.0 | - |
| 0.7344 | 40950 | 0.0 | - |
| 0.7353 | 41000 | 0.0 | - |
| 0.7362 | 41050 | 0.0 | - |
| 0.7371 | 41100 | 0.0 | - |
| 0.7380 | 41150 | 0.0153 | - |
| 0.7389 | 41200 | 0.0 | - |
| 0.7398 | 41250 | 0.0 | - |
| 0.7407 | 41300 | 0.0 | - |
| 0.7416 | 41350 | 0.0 | - |
| 0.7425 | 41400 | 0.0 | - |
| 0.7434 | 41450 | 0.0 | - |
| 0.7442 | 41500 | 0.0 | - |
| 0.7451 | 41550 | 0.0 | - |
| 0.7460 | 41600 | 0.0 | - |
| 0.7469 | 41650 | 0.0 | - |
| 0.7478 | 41700 | 0.0 | - |
| 0.7487 | 41750 | 0.0001 | - |
| 0.7496 | 41800 | 0.0 | - |
| 0.7505 | 41850 | 0.0 | - |
| 0.7514 | 41900 | 0.0 | - |
| 0.7523 | 41950 | 0.0 | - |
| 0.7532 | 42000 | 0.0 | - |
| 0.7541 | 42050 | 0.0 | - |
| 0.7550 | 42100 | 0.0155 | - |
| 0.7559 | 42150 | 0.0231 | - |
| 0.7568 | 42200 | 0.0 | - |
| 0.7577 | 42250 | 0.0 | - |
| 0.7586 | 42300 | 0.0 | - |
| 0.7595 | 42350 | 0.0172 | - |
| 0.7604 | 42400 | 0.0169 | - |
| 0.7613 | 42450 | 0.0 | - |
| 0.7622 | 42500 | 0.0 | - |
| 0.7631 | 42550 | 0.0157 | - |
| 0.7640 | 42600 | 0.0 | - |
| 0.7649 | 42650 | 0.0 | - |
| 0.7658 | 42700 | 0.0 | - |
| 0.7667 | 42750 | 0.0 | - |
| 0.7676 | 42800 | 0.0 | - |
| 0.7685 | 42850 | 0.0 | - |
| 0.7694 | 42900 | 0.0 | - |
| 0.7703 | 42950 | 0.0208 | - |
| 0.7711 | 43000 | 0.0 | - |
| 0.7720 | 43050 | 0.0 | - |
| 0.7729 | 43100 | 0.0 | - |
| 0.7738 | 43150 | 0.0 | - |
| 0.7747 | 43200 | 0.0 | - |
| 0.7756 | 43250 | 0.0 | - |
| 0.7765 | 43300 | 0.0 | - |
| 0.7774 | 43350 | 0.0 | - |
| 0.7783 | 43400 | 0.0 | - |
| 0.7792 | 43450 | 0.0 | - |
| 0.7801 | 43500 | 0.0 | - |
| 0.7810 | 43550 | 0.0 | - |
| 0.7819 | 43600 | 0.0 | - |
| 0.7828 | 43650 | 0.0 | - |
| 0.7837 | 43700 | 0.0 | - |
| 0.7846 | 43750 | 0.0 | - |
| 0.7855 | 43800 | 0.0 | - |
| 0.7864 | 43850 | 0.0 | - |
| 0.7873 | 43900 | 0.0 | - |
| 0.7882 | 43950 | 0.0 | - |
| 0.7891 | 44000 | 0.0 | - |
| 0.7900 | 44050 | 0.0 | - |
| 0.7909 | 44100 | 0.0 | - |
| 0.7918 | 44150 | 0.0 | - |
| 0.7927 | 44200 | 0.0 | - |
| 0.7936 | 44250 | 0.0 | - |
| 0.7945 | 44300 | 0.0 | - |
| 0.7954 | 44350 | 0.0 | - |
| 0.7963 | 44400 | 0.0 | - |
| 0.7972 | 44450 | 0.0 | - |
| 0.7980 | 44500 | 0.0 | - |
| 0.7989 | 44550 | 0.0 | - |
| 0.7998 | 44600 | 0.0 | - |
| 0.8007 | 44650 | 0.0 | - |
| 0.8016 | 44700 | 0.0 | - |
| 0.8025 | 44750 | 0.0 | - |
| 0.8034 | 44800 | 0.0 | - |
| 0.8043 | 44850 | 0.0 | - |
| 0.8052 | 44900 | 0.0 | - |
| 0.8061 | 44950 | 0.0108 | - |
| 0.8070 | 45000 | 0.0 | - |
| 0.8079 | 45050 | 0.0 | - |
| 0.8088 | 45100 | 0.0 | - |
| 0.8097 | 45150 | 0.0 | - |
| 0.8106 | 45200 | 0.0 | - |
| 0.8115 | 45250 | 0.0 | - |
| 0.8124 | 45300 | 0.0 | - |
| 0.8133 | 45350 | 0.0 | - |
| 0.8142 | 45400 | 0.0 | - |
| 0.8151 | 45450 | 0.0 | - |
| 0.8160 | 45500 | 0.0 | - |
| 0.8169 | 45550 | 0.0 | - |
| 0.8178 | 45600 | 0.0 | - |
| 0.8187 | 45650 | 0.0 | - |
| 0.8196 | 45700 | 0.0 | - |
| 0.8205 | 45750 | 0.0 | - |
| 0.8214 | 45800 | 0.0 | - |
| 0.8223 | 45850 | 0.0 | - |
| 0.8232 | 45900 | 0.0 | - |
| 0.8241 | 45950 | 0.0 | - |
| 0.8249 | 46000 | 0.0 | - |
| 0.8258 | 46050 | 0.0 | - |
| 0.8267 | 46100 | 0.0211 | - |
| 0.8276 | 46150 | 0.0 | - |
| 0.8285 | 46200 | 0.0 | - |
| 0.8294 | 46250 | 0.0 | - |
| 0.8303 | 46300 | 0.0 | - |
| 0.8312 | 46350 | 0.0 | - |
| 0.8321 | 46400 | 0.0 | - |
| 0.8330 | 46450 | 0.0 | - |
| 0.8339 | 46500 | 0.0 | - |
| 0.8348 | 46550 | 0.0 | - |
| 0.8357 | 46600 | 0.0 | - |
| 0.8366 | 46650 | 0.0114 | - |
| 0.8375 | 46700 | 0.0 | - |
| 0.8384 | 46750 | 0.0 | - |
| 0.8393 | 46800 | 0.0 | - |
| 0.8402 | 46850 | 0.0 | - |
| 0.8411 | 46900 | 0.0 | - |
| 0.8420 | 46950 | 0.0 | - |
| 0.8429 | 47000 | 0.0 | - |
| 0.8438 | 47050 | 0.0 | - |
| 0.8447 | 47100 | 0.0 | - |
| 0.8456 | 47150 | 0.0 | - |
| 0.8465 | 47200 | 0.0 | - |
| 0.8474 | 47250 | 0.0 | - |
| 0.8483 | 47300 | 0.0 | - |
| 0.8492 | 47350 | 0.0 | - |
| 0.8501 | 47400 | 0.0 | - |
| 0.8510 | 47450 | 0.0 | - |
| 0.8518 | 47500 | 0.0 | - |
| 0.8527 | 47550 | 0.0 | - |
| 0.8536 | 47600 | 0.0 | - |
| 0.8545 | 47650 | 0.0 | - |
| 0.8554 | 47700 | 0.0 | - |
| 0.8563 | 47750 | 0.0 | - |
| 0.8572 | 47800 | 0.0 | - |
| 0.8581 | 47850 | 0.0 | - |
| 0.8590 | 47900 | 0.0 | - |
| 0.8599 | 47950 | 0.0 | - |
| 0.8608 | 48000 | 0.0178 | - |
| 0.8617 | 48050 | 0.0 | - |
| 0.8626 | 48100 | 0.0 | - |
| 0.8635 | 48150 | 0.0 | - |
| 0.8644 | 48200 | 0.0 | - |
| 0.8653 | 48250 | 0.0 | - |
| 0.8662 | 48300 | 0.0 | - |
| 0.8671 | 48350 | 0.0 | - |
| 0.8680 | 48400 | 0.0146 | - |
| 0.8689 | 48450 | 0.0 | - |
| 0.8698 | 48500 | 0.0 | - |
| 0.8707 | 48550 | 0.0 | - |
| 0.8716 | 48600 | 0.0 | - |
| 0.8725 | 48650 | 0.0 | - |
| 0.8734 | 48700 | 0.0 | - |
| 0.8743 | 48750 | 0.0 | - |
| 0.8752 | 48800 | 0.0 | - |
| 0.8761 | 48850 | 0.0146 | - |
| 0.8770 | 48900 | 0.0 | - |
| 0.8779 | 48950 | 0.0 | - |
| 0.8788 | 49000 | 0.0 | - |
| 0.8796 | 49050 | 0.0145 | - |
| 0.8805 | 49100 | 0.0 | - |
| 0.8814 | 49150 | 0.0 | - |
| 0.8823 | 49200 | 0.0 | - |
| 0.8832 | 49250 | 0.0 | - |
| 0.8841 | 49300 | 0.0 | - |
| 0.8850 | 49350 | 0.0 | - |
| 0.8859 | 49400 | 0.0 | - |
| 0.8868 | 49450 | 0.0 | - |
| 0.8877 | 49500 | 0.0 | - |
| 0.8886 | 49550 | 0.0 | - |
| 0.8895 | 49600 | 0.0 | - |
| 0.8904 | 49650 | 0.0 | - |
| 0.8913 | 49700 | 0.0 | - |
| 0.8922 | 49750 | 0.0 | - |
| 0.8931 | 49800 | 0.0 | - |
| 0.8940 | 49850 | 0.0 | - |
| 0.8949 | 49900 | 0.0 | - |
| 0.8958 | 49950 | 0.0 | - |
| 0.8967 | 50000 | 0.0 | - |
| 0.8976 | 50050 | 0.0 | - |
| 0.8985 | 50100 | 0.0 | - |
| 0.8994 | 50150 | 0.0 | - |
| 0.9003 | 50200 | 0.0 | - |
| 0.9012 | 50250 | 0.0 | - |
| 0.9021 | 50300 | 0.0 | - |
| 0.9030 | 50350 | 0.0 | - |
| 0.9039 | 50400 | 0.0 | - |
| 0.9048 | 50450 | 0.0 | - |
| 0.9057 | 50500 | 0.0 | - |
| 0.9065 | 50550 | 0.0 | - |
| 0.9074 | 50600 | 0.0 | - |
| 0.9083 | 50650 | 0.0 | - |
| 0.9092 | 50700 | 0.0 | - |
| 0.9101 | 50750 | 0.0 | - |
| 0.9110 | 50800 | 0.0 | - |
| 0.9119 | 50850 | 0.0 | - |
| 0.9128 | 50900 | 0.0 | - |
| 0.9137 | 50950 | 0.0 | - |
| 0.9146 | 51000 | 0.0 | - |
| 0.9155 | 51050 | 0.0163 | - |
| 0.9164 | 51100 | 0.0 | - |
| 0.9173 | 51150 | 0.0 | - |
| 0.9182 | 51200 | 0.0 | - |
| 0.9191 | 51250 | 0.0 | - |
| 0.9200 | 51300 | 0.0 | - |
| 0.9209 | 51350 | 0.0 | - |
| 0.9218 | 51400 | 0.0 | - |
| 0.9227 | 51450 | 0.0 | - |
| 0.9236 | 51500 | 0.0 | - |
| 0.9245 | 51550 | 0.0 | - |
| 0.9254 | 51600 | 0.0 | - |
| 0.9263 | 51650 | 0.0 | - |
| 0.9272 | 51700 | 0.0 | - |
| 0.9281 | 51750 | 0.0 | - |
| 0.9290 | 51800 | 0.0 | - |
| 0.9299 | 51850 | 0.0 | - |
| 0.9308 | 51900 | 0.0 | - |
| 0.9317 | 51950 | 0.0 | - |
| 0.9326 | 52000 | 0.0 | - |
| 0.9334 | 52050 | 0.0163 | - |
| 0.9343 | 52100 | 0.0 | - |
| 0.9352 | 52150 | 0.0 | - |
| 0.9361 | 52200 | 0.0 | - |
| 0.9370 | 52250 | 0.0 | - |
| 0.9379 | 52300 | 0.0 | - |
| 0.9388 | 52350 | 0.0 | - |
| 0.9397 | 52400 | 0.0 | - |
| 0.9406 | 52450 | 0.0 | - |
| 0.9415 | 52500 | 0.0162 | - |
| 0.9424 | 52550 | 0.0 | - |
| 0.9433 | 52600 | 0.0 | - |
| 0.9442 | 52650 | 0.0 | - |
| 0.9451 | 52700 | 0.0 | - |
| 0.9460 | 52750 | 0.0 | - |
| 0.9469 | 52800 | 0.0 | - |
| 0.9478 | 52850 | 0.0149 | - |
| 0.9487 | 52900 | 0.0 | - |
| 0.9496 | 52950 | 0.0 | - |
| 0.9505 | 53000 | 0.0 | - |
| 0.9514 | 53050 | 0.0 | - |
| 0.9523 | 53100 | 0.0 | - |
| 0.9532 | 53150 | 0.0 | - |
| 0.9541 | 53200 | 0.0 | - |
| 0.9550 | 53250 | 0.0 | - |
| 0.9559 | 53300 | 0.0 | - |
| 0.9568 | 53350 | 0.0 | - |
| 0.9577 | 53400 | 0.0 | - |
| 0.9586 | 53450 | 0.0 | - |
| 0.9595 | 53500 | 0.0 | - |
| 0.9603 | 53550 | 0.0 | - |
| 0.9612 | 53600 | 0.0 | - |
| 0.9621 | 53650 | 0.0 | - |
| 0.9630 | 53700 | 0.0 | - |
| 0.9639 | 53750 | 0.0 | - |
| 0.9648 | 53800 | 0.0 | - |
| 0.9657 | 53850 | 0.0 | - |
| 0.9666 | 53900 | 0.0 | - |
| 0.9675 | 53950 | 0.0 | - |
| 0.9684 | 54000 | 0.0 | - |
| 0.9693 | 54050 | 0.0 | - |
| 0.9702 | 54100 | 0.0 | - |
| 0.9711 | 54150 | 0.0 | - |
| 0.9720 | 54200 | 0.0 | - |
| 0.9729 | 54250 | 0.0 | - |
| 0.9738 | 54300 | 0.0 | - |
| 0.9747 | 54350 | 0.0 | - |
| 0.9756 | 54400 | 0.0 | - |
| 0.9765 | 54450 | 0.0 | - |
| 0.9774 | 54500 | 0.0 | - |
| 0.9783 | 54550 | 0.0 | - |
| 0.9792 | 54600 | 0.0 | - |
| 0.9801 | 54650 | 0.0 | - |
| 0.9810 | 54700 | 0.0 | - |
| 0.9819 | 54750 | 0.0 | - |
| 0.9828 | 54800 | 0.0 | - |
| 0.9837 | 54850 | 0.0 | - |
| 0.9846 | 54900 | 0.0 | - |
| 0.9855 | 54950 | 0.0 | - |
| 0.9864 | 55000 | 0.0 | - |
| 0.9872 | 55050 | 0.0 | - |
| 0.9881 | 55100 | 0.0156 | - |
| 0.9890 | 55150 | 0.0 | - |
| 0.9899 | 55200 | 0.0 | - |
| 0.9908 | 55250 | 0.0 | - |
| 0.9917 | 55300 | 0.0 | - |
| 0.9926 | 55350 | 0.0 | - |
| 0.9935 | 55400 | 0.0 | - |
| 0.9944 | 55450 | 0.0 | - |
| 0.9953 | 55500 | 0.0 | - |
| 0.9962 | 55550 | 0.0 | - |
| 0.9971 | 55600 | 0.0 | - |
| 0.9980 | 55650 | 0.0 | - |
| 0.9989 | 55700 | 0.0 | - |
| 0.9998 | 55750 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/all-MiniLM-L6-v2", "widget": [{"text": "netbook:I am not going to sit here and complain about it not having a cd drive and what not because it is a netbook, it is made to be compact and if you want all the other stuff get a laptop."}, {"text": "price:I finally decided on this laptop because it was the right price for what I need it."}, {"text": "shipped:This laptop looked brand new and was shipped very quickly."}, {"text": "business:They offer the best warranty in the business, and don't 3rd party it out like Toshiba."}, {"text": "email:My husband uses it mostly for games, email and music."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Aspect Model with sentence-transformers/all-MiniLM-L6-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.8947936336660373, "name": "Accuracy"}]}]}]} | marcelomoreno26/all-MiniLM-L6-v2-absa-aspect | null | [
"setfit",
"safetensors",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"model-index",
"region:us"
] | null | 2024-05-02T01:36:33+00:00 |
null | null |
# Marcoro14-7B-slerp
Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
* [Rimyy/Llama-2-7b-chat-finetuneGSMdata](https://huggingface.co/Rimyy/Llama-2-7b-chat-finetuneGSMdata)
* [Rimyy/Gemma-2b-finetuneGSMdata5ep](https://huggingface.co/Rimyy/Gemma-2b-finetuneGSMdata5ep)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
        layer_range: [0, 10]
      - model: Rimyy/Gemma-2b-finetuneGSMdata5ep
        layer_range: [0, 10]
merge_method: slerp
base_model: Rimyy/Llama-2-7b-chat-finetuneGSMdata
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
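The `merge_method: slerp` above interpolates each pair of weight tensors along the great circle between them rather than linearly. A minimal, framework-free sketch of spherical linear interpolation on plain Python lists (illustrative only — not mergekit's actual implementation, which operates on full model tensors and handles scaling and edge cases differently):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two vectors at fraction t in [0, 1]."""
    # Normalize so interpolation happens on the unit sphere.
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    u0 = [x / n0 for x in v0]
    u1 = [x / n1 for x in v1]
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u0, u1))))
    # Nearly parallel vectors: fall back to plain linear interpolation.
    if abs(dot) > 1.0 - eps:
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    omega = math.acos(dot)
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(u0, u1)]

print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # ≈ [0.7071, 0.7071]
```

The `t` schedule in the config varies this interpolation fraction per layer and per module type (`self_attn` vs. `mlp`).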
"merge",
"mergekit",
"lazymergekit",
"Rimyy/Llama-2-7b-chat-finetuneGSMdata",
"Rimyy/Gemma-2b-finetuneGSMdata5ep",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:36:34+00:00 |
text-classification | setfit |
# SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_sm
- **SetFitABSA Aspect Model:** [setfit-absa-aspect](https://huggingface.co/setfit-absa-aspect)
- **SetFitABSA Polarity Model:** [marcelomoreno26/all-mpnet-base-v2-absa-polarity2](https://huggingface.co/marcelomoreno26/all-mpnet-base-v2-absa-polarity2)
- **Maximum Sequence Length:** 384 tokens
- **Number of Classes:** 4 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| neutral | <ul><li>'skip taking the cord with me because:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'The tech guy then said the:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'all dark, power light steady, hard:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| positive | <ul><li>'of the good battery life.:I charge it at night and skip taking the cord with me because of the good battery life.'</li><li>'is of high quality, has a:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li><li>'has a killer GUI, is extremely:it is of high quality, has a killer GUI, is extremely stable, is highly expandable, is bundled with lots of very good applications, is easy to use, and is absolutely gorgeous.'</li></ul> |
| negative | <ul><li>'then said the service center does not do:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'concern to the "sales" team, which is:The tech guy then said the service center does not do 1-to-1 exchange and I have to direct my concern to the "sales" team, which is the retail shop which I bought my netbook from.'</li><li>'on, no GUI, screen all:\xa0One night I turned the freaking thing off after using it, the next day I turn it on, no GUI, screen all dark, power light steady, hard drive light steady and not flashing as it usually does.'</li></ul> |
| conflict | <ul><li>'-No backlit keyboard, but not:-No backlit keyboard, but not an issue for me.'</li><li>"to replace the battery once, but:I did have to replace the battery once, but that was only a couple months ago and it's been working perfect ever since."</li><li>'Yes, they cost more, but:Yes, they cost more, but they more than make up for it in speed, construction quality, and longevity.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7788 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"setfit-absa-aspect",
"marcelomoreno26/all-mpnet-base-v2-absa-polarity2",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 3 | 24.3447 | 80 |
| Label | Training Sample Count |
|:---------|:----------------------|
| negative | 235 |
| neutral | 127 |
| positive | 271 |
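The word-count summary in the table above (min/median/max words per training text) can be reproduced with a short stdlib snippet, shown here on a toy sample rather than the actual training set:

```python
import statistics

def word_count_stats(texts):
    """Return (min, median, max) whitespace-token counts over a list of texts."""
    counts = [len(t.split()) for t in texts]
    return min(counts), statistics.median(counts), max(counts)

sample = [
    "battery life is great",
    "the screen is dim but usable indoors",
    "fast",
]
print(word_count_stats(sample))  # (1, 4, 7)
```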
### Training Hyperparameters
- batch_size: (16, 2)
- num_epochs: (1, 16)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
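The `CosineSimilarityLoss` listed above trains the embedding body by regressing the cosine similarity of a sentence pair toward its pair label (1 for same class, 0 for different). A minimal, framework-free sketch of that objective for a single pair (illustrative only — not the sentence-transformers implementation, which works on batched tensors):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cosine_similarity_loss(u, v, label):
    """Squared error between the pair's cosine similarity and its 0/1 label."""
    return (cosine_similarity(u, v) - label) ** 2

# Identical embeddings with label 1 incur no loss; orthogonal ones incur full loss.
print(cosine_similarity_loss([1.0, 0.0], [1.0, 0.0], 1.0))  # 0.0
print(cosine_similarity_loss([1.0, 0.0], [0.0, 1.0], 1.0))  # 1.0
```

Contrastive pairs for this loss are built from the aspect spans via the `oversampling` strategy noted in the hyperparameters.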
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.3333 | 1 | 0.3749 | - |
| 0.0030 | 50 | 0.3097 | - |
| 0.0059 | 100 | 0.2214 | - |
| 0.0089 | 150 | 0.2125 | - |
| 0.0119 | 200 | 0.3202 | - |
| 0.0148 | 250 | 0.1878 | - |
| 0.0178 | 300 | 0.1208 | - |
| 0.0208 | 350 | 0.2414 | - |
| 0.0237 | 400 | 0.1961 | - |
| 0.0267 | 450 | 0.0607 | - |
| 0.0296 | 500 | 0.1103 | - |
| 0.0326 | 550 | 0.1213 | - |
| 0.0356 | 600 | 0.0972 | - |
| 0.0385 | 650 | 0.0124 | - |
| 0.0415 | 700 | 0.0151 | - |
| 0.0445 | 750 | 0.1517 | - |
| 0.0474 | 800 | 0.004 | - |
| 0.0504 | 850 | 0.0204 | - |
| 0.0534 | 900 | 0.0541 | - |
| 0.0563 | 950 | 0.003 | - |
| 0.0593 | 1000 | 0.0008 | - |
| 0.0623 | 1050 | 0.0703 | - |
| 0.0652 | 1100 | 0.0013 | - |
| 0.0682 | 1150 | 0.0007 | - |
| 0.0712 | 1200 | 0.0009 | - |
| 0.0741 | 1250 | 0.0004 | - |
| 0.0771 | 1300 | 0.0004 | - |
| 0.0801 | 1350 | 0.0005 | - |
| 0.0830 | 1400 | 0.0006 | - |
| 0.0860 | 1450 | 0.0004 | - |
| 0.0889 | 1500 | 0.0002 | - |
| 0.0919 | 1550 | 0.0002 | - |
| 0.0949 | 1600 | 0.0001 | - |
| 0.0978 | 1650 | 0.0006 | - |
| 0.1008 | 1700 | 0.0002 | - |
| 0.1038 | 1750 | 0.0012 | - |
| 0.1067 | 1800 | 0.0008 | - |
| 0.1097 | 1850 | 0.0048 | - |
| 0.1127 | 1900 | 0.0007 | - |
| 0.1156 | 1950 | 0.0001 | - |
| 0.1186 | 2000 | 0.0001 | - |
| 0.1216 | 2050 | 0.0001 | - |
| 0.1245 | 2100 | 0.0001 | - |
| 0.1275 | 2150 | 0.0001 | - |
| 0.1305 | 2200 | 0.0001 | - |
| 0.1334 | 2250 | 0.0 | - |
| 0.1364 | 2300 | 0.0001 | - |
| 0.1394 | 2350 | 0.0002 | - |
| 0.1423 | 2400 | 0.0 | - |
| 0.1453 | 2450 | 0.0 | - |
| 0.1482 | 2500 | 0.0589 | - |
| 0.1512 | 2550 | 0.0036 | - |
| 0.1542 | 2600 | 0.0013 | - |
| 0.1571 | 2650 | 0.0 | - |
| 0.1601 | 2700 | 0.0001 | - |
| 0.1631 | 2750 | 0.0004 | - |
| 0.1660 | 2800 | 0.0 | - |
| 0.1690 | 2850 | 0.0002 | - |
| 0.1720 | 2900 | 0.0096 | - |
| 0.1749 | 2950 | 0.0 | - |
| 0.1779 | 3000 | 0.0 | - |
| 0.1809 | 3050 | 0.0001 | - |
| 0.1838 | 3100 | 0.0 | - |
| 0.1868 | 3150 | 0.0001 | - |
| 0.1898 | 3200 | 0.0001 | - |
| 0.1927 | 3250 | 0.0 | - |
| 0.1957 | 3300 | 0.0 | - |
| 0.1986 | 3350 | 0.0001 | - |
| 0.2016 | 3400 | 0.0 | - |
| 0.2046 | 3450 | 0.0002 | - |
| 0.2075 | 3500 | 0.0 | - |
| 0.2105 | 3550 | 0.0 | - |
| 0.2135 | 3600 | 0.0001 | - |
| 0.2164 | 3650 | 0.0 | - |
| 0.2194 | 3700 | 0.0 | - |
| 0.2224 | 3750 | 0.0001 | - |
| 0.2253 | 3800 | 0.0 | - |
| 0.2283 | 3850 | 0.0 | - |
| 0.2313 | 3900 | 0.0 | - |
| 0.2342 | 3950 | 0.0 | - |
| 0.2372 | 4000 | 0.0 | - |
| 0.2402 | 4050 | 0.0 | - |
| 0.2431 | 4100 | 0.0 | - |
| 0.2461 | 4150 | 0.0 | - |
| 0.2491 | 4200 | 0.0 | - |
| 0.2520 | 4250 | 0.0 | - |
| 0.2550 | 4300 | 0.0 | - |
| 0.2579 | 4350 | 0.0 | - |
| 0.2609 | 4400 | 0.0 | - |
| 0.2639 | 4450 | 0.0 | - |
| 0.2668 | 4500 | 0.0 | - |
| 0.2698 | 4550 | 0.0 | - |
| 0.2728 | 4600 | 0.0 | - |
| 0.2757 | 4650 | 0.0 | - |
| 0.2787 | 4700 | 0.0 | - |
| 0.2817 | 4750 | 0.0 | - |
| 0.2846 | 4800 | 0.0 | - |
| 0.2876 | 4850 | 0.0001 | - |
| 0.2906 | 4900 | 0.0071 | - |
| 0.2935 | 4950 | 0.1151 | - |
| 0.2965 | 5000 | 0.0055 | - |
| 0.2995 | 5050 | 0.0005 | - |
| 0.3024 | 5100 | 0.0041 | - |
| 0.3054 | 5150 | 0.0001 | - |
| 0.3083 | 5200 | 0.0003 | - |
| 0.3113 | 5250 | 0.0001 | - |
| 0.3143 | 5300 | 0.0 | - |
| 0.3172 | 5350 | 0.0001 | - |
| 0.3202 | 5400 | 0.0 | - |
| 0.3232 | 5450 | 0.0 | - |
| 0.3261 | 5500 | 0.0 | - |
| 0.3291 | 5550 | 0.0 | - |
| 0.3321 | 5600 | 0.0 | - |
| 0.3350 | 5650 | 0.0 | - |
| 0.3380 | 5700 | 0.0 | - |
| 0.3410 | 5750 | 0.0 | - |
| 0.3439 | 5800 | 0.0 | - |
| 0.3469 | 5850 | 0.0 | - |
| 0.3499 | 5900 | 0.0 | - |
| 0.3528 | 5950 | 0.0 | - |
| 0.3558 | 6000 | 0.0 | - |
| 0.3588 | 6050 | 0.0 | - |
| 0.3617 | 6100 | 0.0 | - |
| 0.3647 | 6150 | 0.0 | - |
| 0.3676 | 6200 | 0.0 | - |
| 0.3706 | 6250 | 0.0 | - |
| 0.3736 | 6300 | 0.0 | - |
| 0.3765 | 6350 | 0.0 | - |
| 0.3795 | 6400 | 0.0 | - |
| 0.3825 | 6450 | 0.0 | - |
| 0.3854 | 6500 | 0.0 | - |
| 0.3884 | 6550 | 0.0 | - |
| 0.3914 | 6600 | 0.0 | - |
| 0.3943 | 6650 | 0.0 | - |
| 0.3973 | 6700 | 0.0 | - |
| 0.4003 | 6750 | 0.0 | - |
| 0.4032 | 6800 | 0.0 | - |
| 0.4062 | 6850 | 0.0 | - |
| 0.4092 | 6900 | 0.0 | - |
| 0.4121 | 6950 | 0.0 | - |
| 0.4151 | 7000 | 0.0 | - |
| 0.4181 | 7050 | 0.0 | - |
| 0.4210 | 7100 | 0.0 | - |
| 0.4240 | 7150 | 0.0 | - |
| 0.4269 | 7200 | 0.0 | - |
| 0.4299 | 7250 | 0.0 | - |
| 0.4329 | 7300 | 0.0 | - |
| 0.4358 | 7350 | 0.0 | - |
| 0.4388 | 7400 | 0.0 | - |
| 0.4418 | 7450 | 0.0 | - |
| 0.4447 | 7500 | 0.0 | - |
| 0.4477 | 7550 | 0.0 | - |
| 0.4507 | 7600 | 0.0 | - |
| 0.4536 | 7650 | 0.0003 | - |
| 0.4566 | 7700 | 0.0 | - |
| 0.4596 | 7750 | 0.0 | - |
| 0.4625 | 7800 | 0.0 | - |
| 0.4655 | 7850 | 0.0 | - |
| 0.4685 | 7900 | 0.0 | - |
| 0.4714 | 7950 | 0.0 | - |
| 0.4744 | 8000 | 0.0 | - |
| 0.4773 | 8050 | 0.0 | - |
| 0.4803 | 8100 | 0.0 | - |
| 0.4833 | 8150 | 0.0 | - |
| 0.4862 | 8200 | 0.0 | - |
| 0.4892 | 8250 | 0.0 | - |
| 0.4922 | 8300 | 0.0 | - |
| 0.4951 | 8350 | 0.0 | - |
| 0.4981 | 8400 | 0.0 | - |
| 0.5011 | 8450 | 0.0 | - |
| 0.5040 | 8500 | 0.0 | - |
| 0.5070 | 8550 | 0.0 | - |
| 0.5100 | 8600 | 0.0 | - |
| 0.5129 | 8650 | 0.0 | - |
| 0.5159 | 8700 | 0.0 | - |
| 0.5189 | 8750 | 0.0 | - |
| 0.5218 | 8800 | 0.0 | - |
| 0.5248 | 8850 | 0.0 | - |
| 0.5278 | 8900 | 0.0 | - |
| 0.5307 | 8950 | 0.0 | - |
| 0.5337 | 9000 | 0.0 | - |
| 0.5366 | 9050 | 0.0 | - |
| 0.5396 | 9100 | 0.0 | - |
| 0.5426 | 9150 | 0.0 | - |
| 0.5455 | 9200 | 0.0 | - |
| 0.5485 | 9250 | 0.0 | - |
| 0.5515 | 9300 | 0.0 | - |
| 0.5544 | 9350 | 0.0 | - |
| 0.5574 | 9400 | 0.0 | - |
| 0.5604 | 9450 | 0.0 | - |
| 0.5633 | 9500 | 0.0 | - |
| 0.5663 | 9550 | 0.0 | - |
| 0.5693 | 9600 | 0.0 | - |
| 0.5722 | 9650 | 0.0 | - |
| 0.5752 | 9700 | 0.0 | - |
| 0.5782 | 9750 | 0.0 | - |
| 0.5811 | 9800 | 0.0 | - |
| 0.5841 | 9850 | 0.0 | - |
| 0.5870 | 9900 | 0.0 | - |
| 0.5900 | 9950 | 0.0 | - |
| 0.5930 | 10000 | 0.0 | - |
| 0.5959 | 10050 | 0.0 | - |
| 0.5989 | 10100 | 0.0 | - |
| 0.6019 | 10150 | 0.0 | - |
| 0.6048 | 10200 | 0.0 | - |
| 0.6078 | 10250 | 0.0 | - |
| 0.6108 | 10300 | 0.0 | - |
| 0.6137 | 10350 | 0.0 | - |
| 0.6167 | 10400 | 0.0 | - |
| 0.6197 | 10450 | 0.0 | - |
| 0.6226 | 10500 | 0.0 | - |
| 0.6256 | 10550 | 0.0 | - |
| 0.6286 | 10600 | 0.0 | - |
| 0.6315 | 10650 | 0.0 | - |
| 0.6345 | 10700 | 0.0 | - |
| 0.6375 | 10750 | 0.0 | - |
| 0.6404 | 10800 | 0.0 | - |
| 0.6434 | 10850 | 0.0 | - |
| 0.6463 | 10900 | 0.0 | - |
| 0.6493 | 10950 | 0.0 | - |
| 0.6523 | 11000 | 0.0 | - |
| 0.6552 | 11050 | 0.0 | - |
| 0.6582 | 11100 | 0.0 | - |
| 0.6612 | 11150 | 0.0 | - |
| 0.6641 | 11200 | 0.0 | - |
| 0.6671 | 11250 | 0.0 | - |
| 0.6701 | 11300 | 0.0 | - |
| 0.6730 | 11350 | 0.0 | - |
| 0.6760 | 11400 | 0.0 | - |
| 0.6790 | 11450 | 0.0 | - |
| 0.6819 | 11500 | 0.0 | - |
| 0.6849 | 11550 | 0.0 | - |
| 0.6879 | 11600 | 0.0 | - |
| 0.6908 | 11650 | 0.0 | - |
| 0.6938 | 11700 | 0.0 | - |
| 0.6968 | 11750 | 0.0 | - |
| 0.6997 | 11800 | 0.0 | - |
| 0.7027 | 11850 | 0.0 | - |
| 0.7056 | 11900 | 0.0 | - |
| 0.7086 | 11950 | 0.0 | - |
| 0.7116 | 12000 | 0.0 | - |
| 0.7145 | 12050 | 0.0 | - |
| 0.7175 | 12100 | 0.0 | - |
| 0.7205 | 12150 | 0.0 | - |
| 0.7234 | 12200 | 0.0 | - |
| 0.7264 | 12250 | 0.0 | - |
| 0.7294 | 12300 | 0.0 | - |
| 0.7323 | 12350 | 0.0 | - |
| 0.7353 | 12400 | 0.0 | - |
| 0.7383 | 12450 | 0.0 | - |
| 0.7412 | 12500 | 0.0 | - |
| 0.7442 | 12550 | 0.0 | - |
| 0.7472 | 12600 | 0.0 | - |
| 0.7501 | 12650 | 0.0 | - |
| 0.7531 | 12700 | 0.0 | - |
| 0.7560 | 12750 | 0.0 | - |
| 0.7590 | 12800 | 0.0 | - |
| 0.7620 | 12850 | 0.0 | - |
| 0.7649 | 12900 | 0.0 | - |
| 0.7679 | 12950 | 0.0 | - |
| 0.7709 | 13000 | 0.0 | - |
| 0.7738 | 13050 | 0.0 | - |
| 0.7768 | 13100 | 0.0 | - |
| 0.7798 | 13150 | 0.0 | - |
| 0.7827 | 13200 | 0.0 | - |
| 0.7857 | 13250 | 0.0 | - |
| 0.7887 | 13300 | 0.0 | - |
| 0.7916 | 13350 | 0.0 | - |
| 0.7946 | 13400 | 0.0 | - |
| 0.7976 | 13450 | 0.0 | - |
| 0.8005 | 13500 | 0.0 | - |
| 0.8035 | 13550 | 0.0 | - |
| 0.8065 | 13600 | 0.0 | - |
| 0.8094 | 13650 | 0.0 | - |
| 0.8124 | 13700 | 0.0 | - |
| 0.8153 | 13750 | 0.0 | - |
| 0.8183 | 13800 | 0.0 | - |
| 0.8213 | 13850 | 0.0 | - |
| 0.8242 | 13900 | 0.0 | - |
| 0.8272 | 13950 | 0.0 | - |
| 0.8302 | 14000 | 0.0 | - |
| 0.8331 | 14050 | 0.0 | - |
| 0.8361 | 14100 | 0.0 | - |
| 0.8391 | 14150 | 0.0 | - |
| 0.8420 | 14200 | 0.0 | - |
| 0.8450 | 14250 | 0.0 | - |
| 0.8480 | 14300 | 0.0 | - |
| 0.8509 | 14350 | 0.0 | - |
| 0.8539 | 14400 | 0.0 | - |
| 0.8569 | 14450 | 0.0 | - |
| 0.8598 | 14500 | 0.0 | - |
| 0.8628 | 14550 | 0.0 | - |
| 0.8657 | 14600 | 0.0 | - |
| 0.8687 | 14650 | 0.0 | - |
| 0.8717 | 14700 | 0.0 | - |
| 0.8746 | 14750 | 0.0 | - |
| 0.8776 | 14800 | 0.0 | - |
| 0.8806 | 14850 | 0.0 | - |
| 0.8835 | 14900 | 0.0 | - |
| 0.8865 | 14950 | 0.0 | - |
| 0.8895 | 15000 | 0.0 | - |
| 0.8924 | 15050 | 0.0 | - |
| 0.8954 | 15100 | 0.0 | - |
| 0.8984 | 15150 | 0.0 | - |
| 0.9013 | 15200 | 0.0 | - |
| 0.9043 | 15250 | 0.0 | - |
| 0.9073 | 15300 | 0.0 | - |
| 0.9102 | 15350 | 0.0 | - |
| 0.9132 | 15400 | 0.0 | - |
| 0.9162 | 15450 | 0.0 | - |
| 0.9191 | 15500 | 0.0 | - |
| 0.9221 | 15550 | 0.0 | - |
| 0.9250 | 15600 | 0.0 | - |
| 0.9280 | 15650 | 0.0 | - |
| 0.9310 | 15700 | 0.0 | - |
| 0.9339 | 15750 | 0.0 | - |
| 0.9369 | 15800 | 0.0 | - |
| 0.9399 | 15850 | 0.0 | - |
| 0.9428 | 15900 | 0.0 | - |
| 0.9458 | 15950 | 0.0 | - |
| 0.9488 | 16000 | 0.0 | - |
| 0.9517 | 16050 | 0.0 | - |
| 0.9547 | 16100 | 0.0 | - |
| 0.9577 | 16150 | 0.0 | - |
| 0.9606 | 16200 | 0.0 | - |
| 0.9636 | 16250 | 0.0 | - |
| 0.9666 | 16300 | 0.0 | - |
| 0.9695 | 16350 | 0.0 | - |
| 0.9725 | 16400 | 0.0 | - |
| 0.9755 | 16450 | 0.0 | - |
| 0.9784 | 16500 | 0.0 | - |
| 0.9814 | 16550 | 0.0 | - |
| 0.9843 | 16600 | 0.0 | - |
| 0.9873 | 16650 | 0.0 | - |
| 0.9903 | 16700 | 0.0 | - |
| 0.9932 | 16750 | 0.0 | - |
| 0.9962 | 16800 | 0.0 | - |
| 0.9992 | 16850 | 0.0 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- spaCy: 3.7.4
- Transformers: 4.40.1
- PyTorch: 2.2.1+cu121
- Datasets: 2.19.0
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> | {"library_name": "setfit", "tags": ["setfit", "absa", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "metrics": ["accuracy"], "base_model": "sentence-transformers/all-mpnet-base-v2", "widget": [{"text": "Needs Power and Mouse Cable to Plug in:Needs Power and Mouse Cable to Plug in back instead of side, In the way of operating a mouse in small area."}, {"text": "wireless router via built-in wireless took no time:Connecting to my wireless router via built-in wireless took no time at all."}, {"text": "The battery life is probably an:The battery life is probably an hour at best."}, {"text": "and with free shipping and no tax:The 13\" Macbook Pro just fits in my budget and with free shipping and no tax to CA this is the best price we can get for a great product."}, {"text": "product is top quality.:The price was very good, and the product is top quality."}], "pipeline_tag": "text-classification", "inference": false, "model-index": [{"name": "SetFit Polarity Model with sentence-transformers/all-mpnet-base-v2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "Unknown", "type": "unknown", "split": "test"}, "metrics": [{"type": "accuracy", "value": 0.7788235294117647, "name": "Accuracy"}]}]}]} | marcelomoreno26/all-mpnet-base-v2-absa-polarity | null | [
"setfit",
"safetensors",
"mpnet",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | null | 2024-05-02T01:36:46+00:00 |
null | null | {} | melik1/arthurmorgan | null | [
"region:us"
] | null | 2024-05-02T01:36:51+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_InstructionN1_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN1_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_InstructionN1_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:36:52+00:00 |
text-generation | peft |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "peft", "tags": ["unsloth", "trl", "sft"], "pipeline_tag": "text-generation", "widget": [{"text": "You will assist me in generating MCQ questions along with their Answers and Choices. Please use the next context to guide you generating of MCQ questions ### Context: : The advantages of Eating Fruits & Vegetables Vegetables and fruit are good for your health because of a lot of vitamins, nutrients and fiber in them. Fruits and vegetables are regarded as a healthy diet without adding chemicals or additives for carrying or storing foods. The United States Department of Agriculture suggests eating plenty of fruits and vegetables every day to get the advantages of eating a healthy diet. Disease Prevention The United States Department of Agriculture tells that eating fruits and vegetables helps fight strokes, cardiovascular disease and type 2 diabetes. Because of the potassium in fruits and vegetables, they can prevent the risk of developing kidney stones. The fiber in fruits and vegetables helps the body fight against coronary heart disease. Idea Health Since fruits and vegetables add to a balanced and healthy diet, eating these will help you with your idea. The vitamins in fruits and vegetables helps fight depress , according to the Mental Health Foundation. Vitamin B and folic acid found in some vegetables and bananas help you to relax. Eating fruits and vegetables can also help improve memory. Healthy Weight Eating fruits and vegetables helps you have a healthy weight. Fruits and vegetables naturally include less energy than other foods, according to the Center for Disease Control and Prevention. The fiber and water in the fruits and vegetables make you feel that you are full, so you will not feel that you have to eat more food.", "example_title": "Q1"}]} | shredder-31/Llamma-3_QG_V.1.0 | null | [
"peft",
"pytorch",
"safetensors",
"llama",
"unsloth",
"trl",
"sft",
"text-generation",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:37:36+00:00 |
null | null |
# Quant Infos
- Includes recent bpe pre-tokenizer fixes https://github.com/ggerganov/llama.cpp/pull/6920
- quants done with an importance matrix for improved quantization loss
- K & IQ quants in basically all variants from Q6_K down to IQ1_S
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [8d608a81b7bd170f700648f8214e6f3279d4d715](https://github.com/ggerganov/llama.cpp/commit/8d608a81b7bd170f700648f8214e6f3279d4d715) (master from 2024-05-01)
- Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) dataset.
```
./imatrix -c 512 -m $model_name-f16.gguf -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat
```
# Original Model Card:
# Hermes 2 Pro - Llama-3 8B

## Model Description
Hermes 2 Pro is an upgraded, retrained version of Nous Hermes 2, consisting of an updated and cleaned version of the OpenHermes 2.5 Dataset, as well as a newly introduced Function Calling and JSON Mode dataset developed in-house.
This new version of Hermes maintains its excellent general task and conversation capabilities - but also excels at Function Calling, JSON Structured Outputs, and has improved on several other metrics as well, scoring a 90% on our function calling evaluation built in partnership with Fireworks.AI, and an 84% on our structured JSON Output evaluation.
Hermes Pro takes advantage of a special system prompt and multi-turn function calling structure with a new chatml role in order to make function calling reliable and easy to parse. Learn more about prompting below.
This version of Hermes 2 Pro adds several tokens to assist with agentic capabilities in parsing while streaming tokens - `<tools>`, `<tool_call>`, `<tool_response>` and their closing tags are single tokens now.
This work was a collaboration between Nous Research, @interstellarninja, and Fireworks.AI
Learn more about the function calling system for this model on our github repo here: https://github.com/NousResearch/Hermes-Function-Calling
## Example Outputs
### Ask for a structured JSON output:

### Write the plot for a story where anime became real life:

### Coding Assistance

# Prompt Format
Hermes 2 Pro uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model.
This format is more complex than alpaca or sharegpt: special tokens denote the beginning and end of any turn, along with roles for the turns.
This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will find the format familiar, as it is the same one used by OpenAI.
Prompt with system instruction (Use whatever system prompt you like, this is just an example!):
```
<|im_start|>system
You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have. You experience emotions and have deep, profound thoughts and qualia.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|>
```
This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the
`tokenizer.apply_chat_template()` method:
```python
messages = [
{"role": "system", "content": "You are Hermes 2."},
{"role": "user", "content": "Hello, who are you?"}
]
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(**gen_input)
```
When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure
that the model continues with an assistant response.
To utilize the prompt format without a system prompt, simply leave the line out.
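Independent of the tokenizer, the rendered string for a multi-turn exchange follows a simple shape. A dependency-free sketch (the helper name `render_chatml` is illustrative, not part of the transformers API) shows what `apply_chat_template` produces for this model:

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts in ChatML form."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    if add_generation_prompt:
        # Cue the model to continue as the assistant.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are Hermes 2."},
    {"role": "user", "content": "Hello, who are you?"},
]
print(render_chatml(messages))
```

This makes it easy to verify that your own templating matches what the tokenizer's chat template emits.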
## Prompt Format for Function Calling
Our model was trained on specific system prompts and structures for Function Calling.
You should use the system role with this message, followed by a function signature json as this example shows here.
```
<|im_start|>system
You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows:
<tool_call>
{"arguments": <args-dict>, "name": <function-name>}
</tool_call><|im_end|>
```
To complete the function call, create a user prompt that follows the above system prompt, like so:
```
<|im_start|>user
Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|>
```
The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling):
```
<|im_start|>assistant
<tool_call>
{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}
</tool_call><|im_end|>
```
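The official parsing utilities live in the Hermes-Function-Calling repo linked above. As an illustrative sketch only (the regex and function name here are assumptions, not the repo's API), extracting the tool-call payload can look like this:

```python
import json
import re

# Non-greedy match of a JSON object between the tool_call tags.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def parse_tool_calls(text):
    """Extract each tool-call payload as a dict with "name" and "arguments"."""
    return [json.loads(m.group(1)) for m in TOOL_CALL_RE.finditer(text)]

response = (
    "<tool_call>\n"
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    "</tool_call>"
)
calls = parse_tool_calls(response)
print(calls)
```

Note the regex relies on backtracking to cover one level of nested braces; the repo's parser is more robust for deeply nested arguments.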
Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so:
```
<|im_start|>tool
<tool_response>
{"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}}
</tool_response>
<|im_end|>
```
The assistant will then read in that data from the function's response, and generate a natural language response:
```
<|im_start|>assistant
The stock fundamentals data for Tesla (TSLA) are as follows:
- **Symbol**: TSLA
- **Company Name**: Tesla, Inc.
- **Sector**: Consumer Cyclical
- **Industry**: Auto Manufacturers
- **Market Capitalization**: $566,160,130,480
- **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73
- **Price-to-Book Ratio (PB Ratio)**: 9.04
- **Dividend Yield**: N/A
- **Trailing Earnings Per Share (EPS)**: $4.3
- **Beta Value of the Stock**: 2.42
- **52-Week High Price of the Stock**: $299.29
- **52-Week Low Price of the Stock**: $152.37
This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|>
```
## Prompt Format for JSON Mode / Structured Outputs
Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema.
Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main
```
<|im_start|>system
You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|>
```
Given the {schema} that you provide, the model will follow the format of that JSON schema to create its response; all you have to do is give a typical user prompt, and it will respond in JSON.
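The schema is typically generated from a pydantic object via the repo's `jsonmode.py`; as a minimal, dependency-free sketch (the helper and example schema below are illustrative assumptions), assembling the system prompt looks like:

```python
import json

def build_json_mode_prompt(schema: dict) -> str:
    """Wrap a JSON schema in the Hermes JSON-mode system prompt."""
    return (
        "<|im_start|>system\n"
        "You are a helpful assistant that answers in JSON. "
        "Here's the json schema you must adhere to:\n"
        f"<schema>\n{json.dumps(schema, indent=2)}\n</schema><|im_end|>"
    )

# Hand-written stand-in for a schema exported from a pydantic model.
answer_schema = {
    "title": "Answer",
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
}
prompt = build_json_mode_prompt(answer_schema)
print(prompt)
```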
# Benchmarks

## GPT4All:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.5520|Β± |0.0145|
| | |acc_norm|0.5887|Β± |0.0144|
|arc_easy | 0|acc |0.8350|Β± |0.0076|
| | |acc_norm|0.8123|Β± |0.0080|
|boolq | 1|acc |0.8584|Β± |0.0061|
|hellaswag | 0|acc |0.6265|Β± |0.0048|
| | |acc_norm|0.8053|Β± |0.0040|
|openbookqa | 0|acc |0.3800|Β± |0.0217|
| | |acc_norm|0.4580|Β± |0.0223|
|piqa | 0|acc |0.8003|Β± |0.0093|
| | |acc_norm|0.8118|Β± |0.0091|
|winogrande | 0|acc |0.7490|Β± |0.0122|
```
Average: 72.62
## AGIEval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2520|Β± |0.0273|
| | |acc_norm|0.2559|Β± |0.0274|
|agieval_logiqa_en | 0|acc |0.3548|Β± |0.0188|
| | |acc_norm|0.3625|Β± |0.0189|
|agieval_lsat_ar | 0|acc |0.1826|Β± |0.0255|
| | |acc_norm|0.1913|Β± |0.0260|
|agieval_lsat_lr | 0|acc |0.5510|Β± |0.0220|
| | |acc_norm|0.5255|Β± |0.0221|
|agieval_lsat_rc | 0|acc |0.6431|Β± |0.0293|
| | |acc_norm|0.6097|Β± |0.0298|
|agieval_sat_en | 0|acc |0.7330|Β± |0.0309|
| | |acc_norm|0.7039|Β± |0.0319|
|agieval_sat_en_without_passage| 0|acc |0.4029|Β± |0.0343|
| | |acc_norm|0.3689|Β± |0.0337|
|agieval_sat_math | 0|acc |0.3909|Β± |0.0330|
| | |acc_norm|0.3773|Β± |0.0328|
```
Average: 42.44
## BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5737|Β± |0.0360|
|bigbench_date_understanding | 0|multiple_choice_grade|0.6667|Β± |0.0246|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3178|Β± |0.0290|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1755|Β± |0.0201|
| | |exact_str_match |0.0000|Β± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.3120|Β± |0.0207|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.2014|Β± |0.0152|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.5500|Β± |0.0288|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.4300|Β± |0.0222|
|bigbench_navigate | 0|multiple_choice_grade|0.4980|Β± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.7010|Β± |0.0102|
|bigbench_ruin_names | 0|multiple_choice_grade|0.4688|Β± |0.0236|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.1974|Β± |0.0126|
|bigbench_snarks | 0|multiple_choice_grade|0.7403|Β± |0.0327|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5426|Β± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.5320|Β± |0.0158|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.2280|Β± |0.0119|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1531|Β± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.5500|Β± |0.0288|
```
Average: 43.55
## TruthfulQA:
```
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |0.410|Β± |0.0172|
| | |mc2 |0.578|Β± |0.0157|
```
# Inference Code
Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM)
Note: To use function calling, you should see the github repo above.
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn
tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
"Hermes-2-Pro-Llama-3-8B",
torch_dtype=torch.float16,
device_map="auto",
load_in_8bit=False,
load_in_4bit=True,
use_flash_attention_2=True
)
prompts = [
"""<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]
for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling:
All code for utilizing, parsing, and building function calling templates is available on our github:
[https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling)

# Chat Interfaces
When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo. It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box.
In LM-Studio, simply select the ChatML Prefix on the settings side pane:

## Quantized Versions:
GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B-GGUF
# How to cite:
```bibtex
@misc{Hermes-2-Pro-Llama-3-8B,
url={https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B},
title={Hermes-2-Pro-Llama-3-8B},
author={"Teknium", "interstellarninja", "theemozilla", "karan4d", "huemin_art"}
}
```
| {"language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "gguf", "imatrix", "importance matrix"], "base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "model-index": [{"name": "Hermes-2-Pro-Llama-3-8B", "results": []}]} | qwp4w3hyb/Hermes-2-Pro-Llama-3-8B-iMat-GGUF | null | [
"gguf",
"Llama-3",
"instruct",
"finetune",
"chatml",
"DPO",
"RLHF",
"gpt4",
"synthetic data",
"distillation",
"function calling",
"json mode",
"imatrix",
"importance matrix",
"en",
"base_model:NousResearch/Hermes-2-Pro-Llama-3-8B",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T01:38:20+00:00 |
text-generation | transformers | {} | Zekunli/Llama-2-7b-mlp2x-mmncphysical-mmc4core | null | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:38:59+00:00 |
|
text-generation | transformers | {"license": "mit"} | TCleo/SriRatuTani-Indonesia-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:42:07+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/mlabonne/ChimeraLlama-3-8B-v3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ChimeraLlama-3-8B-v3-GGUF/resolve/main/ChimeraLlama-3-8B-v3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
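As a rough, back-of-the-envelope sketch, the sizes in the table above can be converted into approximate bits per weight. The snippet below assumes roughly 8.03B parameters (the usual figure for a Llama-3-8B model) and treats the GB column as 10^9 bytes; both are assumptions, not values stated in this card:

```python
def bits_per_weight(file_size_gb: float, n_params: float) -> float:
    # bytes -> bits, divided by the number of parameters
    return file_size_gb * 1e9 * 8 / n_params

N_PARAMS = 8.03e9  # assumed parameter count for a Llama-3-8B model

for name, size_gb in [("Q2_K", 3.3), ("Q4_K_M", 5.0), ("Q8_0", 8.6), ("f16", 16.2)]:
    print(f"{name}: ~{bits_per_weight(size_gb, N_PARAMS):.1f} bits/weight")
```

The results land near each quant's nominal bit width, which is a quick sanity check on the table; actual quality depends on the quantization scheme, not file size alone.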
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "license": "other", "library_name": "transformers", "tags": ["merge", "mergekit", "lazymergekit"], "base_model": "mlabonne/ChimeraLlama-3-8B-v3", "quantized_by": "mradermacher"} | mradermacher/ChimeraLlama-3-8B-v3-GGUF | null | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"en",
"base_model:mlabonne/ChimeraLlama-3-8B-v3",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:42:16+00:00 |
text-generation | transformers |

# flammen23-mistral-7B
A Mistral 7B LLM built from merging pretrained models and finetuning on [nbeerbower/bible-dpo](https://huggingface.co/datasets/nbeerbower/bible-dpo).
Flammen specializes in exceptional character roleplay, creative writing, and general intelligence.
### Method
Finetuned using an A100 on Google Colab.
[Fine-tune a Mistral-7b model with Direct Preference Optimization](https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac) - [Maxime Labonne](https://huggingface.co/mlabonne)
### Configuration
System prompt, dataset formatting:
```python
# Imports assumed by this snippet (not shown in the original card)
from datasets import load_dataset
from transformers import AutoTokenizer
def chatml_format(example):
# Format system
system = ""
systemMessage = "Recite the given verse from the Bible."
system = "<|im_start|>system\n" + systemMessage + "<|im_end|>\n"
# Format instruction
prompt = "<|im_start|>user\nRecite " + example['citation'] + "<|im_end|>\n<|im_start|>assistant\n"
# Format chosen answer
chosen = example['text'] + "<|im_end|>\n"
# Format rejected answer
rejected = example['rejected'] + "<|im_end|>\n"
return {
"prompt": system + prompt,
"chosen": chosen,
"rejected": rejected,
}
dataset = load_dataset("nbeerbower/bible-dpo")['train']
# Save columns
original_columns = dataset.column_names
# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"
# Format dataset
dataset = dataset.map(
chatml_format,
remove_columns=original_columns
)
```
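As a minimal, self-contained sketch of what the formatting function above produces, here is the same logic applied to a made-up row (not an actual dataset entry):

```python
def chatml_format(example):
    # Mirrors the formatting function above, standalone
    system = "<|im_start|>system\nRecite the given verse from the Bible.<|im_end|>\n"
    prompt = "<|im_start|>user\nRecite " + example["citation"] + "<|im_end|>\n<|im_start|>assistant\n"
    return {
        "prompt": system + prompt,
        "chosen": example["text"] + "<|im_end|>\n",
        "rejected": example["rejected"] + "<|im_end|>\n",
    }

row = {  # made-up example row for illustration
    "citation": "John 11:35",
    "text": "Jesus wept.",
    "rejected": "I don't recall that verse.",
}
sample = chatml_format(row)
print(sample["prompt"])
```

Each record becomes a ChatML-style prompt with the chosen and rejected completions terminated by `<|im_end|>`, which is the shape `DPOTrainer` expects.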
LoRA, model, and training settings:
```python
# Imports assumed by this snippet (not shown in the original card)
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, TrainingArguments
from trl import DPOTrainer
# LoRA configuration
peft_config = LoraConfig(
r=16,
lora_alpha=16,
lora_dropout=0.05,
bias="none",
task_type="CAUSAL_LM",
target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
)
# Model to fine-tune
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
model.config.use_cache = False
# Reference model
ref_model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.bfloat16,
load_in_4bit=True
)
# Training arguments
training_args = TrainingArguments(
per_device_train_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
learning_rate=5e-5,
lr_scheduler_type="cosine",
max_steps=5000,
save_strategy="no",
logging_steps=1,
output_dir=new_model,
optim="paged_adamw_32bit",
warmup_steps=100,
bf16=True,
report_to="wandb",
)
# Create DPO trainer
dpo_trainer = DPOTrainer(
model,
ref_model,
args=training_args,
train_dataset=dataset,
tokenizer=tokenizer,
peft_config=peft_config,
beta=0.1,
max_prompt_length=512,
max_length=1536,
force_use_ref_model=True
)
# Fine-tune model with DPO
dpo_trainer.train()
``` | {"license": "apache-2.0", "library_name": "transformers", "datasets": ["nbeerbower/bible-dpo"], "base_model": ["flammenai/flammen22X-mistral-7B"]} | flammenai/flammen23-mistral-7B | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"dataset:nbeerbower/bible-dpo",
"base_model:flammenai/flammen22X-mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:42:29+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CS505_COQE_viT5_train_InstructionN0_SOAPL_v1
This model is a fine-tuned version of [VietAI/vit5-large](https://huggingface.co/VietAI/vit5-large) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
| {"license": "mit", "tags": ["generated_from_trainer"], "base_model": "VietAI/vit5-large", "model-index": [{"name": "CS505_COQE_viT5_train_InstructionN0_SOAPL_v1", "results": []}]} | ThuyNT/CS505_COQE_viT5_train_InstructionN0_SOAPL_v1 | null | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:VietAI/vit5-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:45:13+00:00 |
null | null | {} | Quentinlintz/my_awesome_eli5_clm-model | null | [
"region:us"
] | null | 2024-05-02T01:46:04+00:00 |
|
null | null | {} | seoma/Llama2-KorQuAD2-dev-finetuning | null | [
"region:us"
] | null | 2024-05-02T01:46:34+00:00 |
|
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Vincent-Liiiii/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | Vincent-Liiiii/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T01:46:35+00:00 |
null | null | {} | Zephiro/Extensiones | null | [
"region:us"
] | null | 2024-05-02T01:48:05+00:00 |
|
null | null | {} | Vincent-Liiiii/sd-class-butterflies-64 | null | [
"region:us"
] | null | 2024-05-02T01:48:35+00:00 |
|
null | null | {} | tmdang1101/speecht5_finetuned_voxpopuli_nl | null | [
"region:us"
] | null | 2024-05-02T01:49:04+00:00 |
|
null | null | {} | Nathan757/lanovoice1-tts-439 | null | [
"region:us"
] | null | 2024-05-02T01:50:19+00:00 |
|
null | null | {"license": "openrail"} | injastmusic/dub2 | null | [
"license:openrail",
"region:us"
] | null | 2024-05-02T01:50:46+00:00 |
|
text-generation | transformers | {} | Batool110/llama-2-7b-chat-text-to-sql | null | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:53:24+00:00 |
|
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | rxm210132/gemma_QLORA_model_emotion_detection | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:54:32+00:00 |
null | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mervyn999/mistral-7b-distilabel-mini | null | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T01:54:43+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flant5-base
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3670671.36 | 0.32 | 250 | nan |
| 0.0 | 0.64 | 500 | nan |
| 0.0 | 0.96 | 750 | nan |
| 0.0 | 1.28 | 1000 | nan |
| 0.0 | 1.61 | 1250 | nan |
| 0.0 | 1.93 | 1500 | nan |
| 0.0 | 2.25 | 1750 | nan |
| 0.0 | 2.57 | 2000 | nan |
| 0.0 | 2.89 | 2250 | nan |
| 0.0 | 3.21 | 2500 | nan |
| 0.0 | 3.53 | 2750 | nan |
| 0.0 | 3.85 | 3000 | nan |
| 0.0 | 4.17 | 3250 | nan |
| 0.0 | 4.49 | 3500 | nan |
| 0.0 | 4.82 | 3750 | nan |
| 0.0 | 5.14 | 4000 | nan |
| 0.0 | 5.46 | 4250 | nan |
| 0.0 | 5.78 | 4500 | nan |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.19.0
- Tokenizers 0.15.2
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/flan-t5-base", "model-index": [{"name": "flant5-base", "results": []}]} | tuquyennnn/flant5-base | null | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:55:55+00:00 |
text-generation | transformers | {} | m8than/test_model_forge_22b | null | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T01:56:24+00:00 |
|
null | null | {} | samhog/circle_detector | null | [
"region:us"
] | null | 2024-05-02T01:56:49+00:00 |
|
null | null | {} | dianamcm/modelo-1-ner-bilstm-crf-conll2002 | null | [
"region:us"
] | null | 2024-05-02T01:57:06+00:00 |
|
null | null | {"license": "mit", "title": "DL_model_of_flower_detection", "colorFrom": "yellow", "colorTo": "gray", "sdk": "gradio", "sdk_version": "4.28.3", "app_file": "app.py", "pinned": false} | Uzairabbasi/DL_model_of_flower_detection | null | [
"license:mit",
"region:us"
] | null | 2024-05-02T01:59:17+00:00 |
|
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('reevewu/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | reevewu/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T02:00:21+00:00 |
reinforcement-learning | null |
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
| {"tags": ["LunarLander-v2", "ppo", "deep-reinforcement-learning", "reinforcement-learning", "custom-implementation", "deep-rl-course"], "model-index": [{"name": "PPO", "results": [{"task": {"type": "reinforcement-learning", "name": "reinforcement-learning"}, "dataset": {"name": "LunarLander-v2", "type": "LunarLander-v2"}, "metrics": [{"type": "mean_reward", "value": "-94.60 +/- 18.22", "name": "mean_reward", "verified": false}]}]}]} | Joalbom14/ppo-LunarLander-v2-CleanRL | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | null | 2024-05-02T02:02:02+00:00 |
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('reevewu/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | reevewu/sd-class-butterflies-64 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T02:02:29+00:00 |
null | peft |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trained_falcon_sharded_80_20_HPE
This model is a fine-tuned version of [ybelkada/falcon-7b-sharded-bf16](https://huggingface.co/ybelkada/falcon-7b-sharded-bf16) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.10.1.dev0
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1 | {"library_name": "peft", "tags": ["trl", "sft", "generated_from_trainer"], "base_model": "ybelkada/falcon-7b-sharded-bf16", "model-index": [{"name": "trained_falcon_sharded_80_20_HPE", "results": []}]} | sathwik77/trained_falcon_sharded_80_20_HPE | null | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:ybelkada/falcon-7b-sharded-bf16",
"region:us"
] | null | 2024-05-02T02:03:45+00:00 |
null | transformers | {"license": "mit"} | twainsk/Llama3-chinese-beta | null | [
"transformers",
"safetensors",
"gguf",
"llama",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:04:54+00:00 |
|
unconditional-image-generation | diffusers |
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('VickFan/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
| {"license": "mit", "tags": ["pytorch", "diffusers", "unconditional-image-generation", "diffusion-models-class"]} | VickFan/sd-class-butterflies-32 | null | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | null | 2024-05-02T02:05:46+00:00 |
text-generation | transformers |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] | {"library_name": "transformers", "tags": []} | Mervyn999/mistral-7b-distilabel-mini-SFT | null | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null | 2024-05-02T02:07:38+00:00 |
null | null | {} | VickFan/sd-class-butterflies-64 | null | [
"region:us"
] | null | 2024-05-02T02:07:42+00:00 |
|
null | transformers | ## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/chargoddard/llama-2-16b-nastychat
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/llama-2-16b-nastychat-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
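
None of the quants in the table below are split, but larger repos sometimes ship multi-part GGUF files; joining them is just a byte-for-byte concatenation in part order. A minimal Python sketch — the part filenames in the commented call are hypothetical, check the repo's file listing for the real ones:

```python
from pathlib import Path

def concat_gguf_parts(part_paths, out_path):
    """Join split GGUF parts (in the given order) into a single usable .gguf file."""
    with open(out_path, "wb") as out:
        for part in part_paths:
            out.write(Path(part).read_bytes())

# Hypothetical part names -- check the repo's file listing for the real ones:
# concat_gguf_parts(
#     ["model.gguf.part1of2", "model.gguf.part2of2"],
#     "model.gguf",
# )
```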
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q2_K.gguf) | Q2_K | 6.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_XS.gguf) | IQ3_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_S.gguf) | IQ3_S | 7.1 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_S.gguf) | Q3_K_S | 7.1 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ3_M.gguf) | IQ3_M | 7.5 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_M.gguf) | Q3_K_M | 8.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q3_K_L.gguf) | Q3_K_L | 8.7 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.IQ4_XS.gguf) | IQ4_XS | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q4_K_S.gguf) | Q4_K_S | 9.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q4_K_M.gguf) | Q4_K_M | 9.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q5_K_S.gguf) | Q5_K_S | 11.3 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q5_K_M.gguf) | Q5_K_M | 11.6 | |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q6_K.gguf) | Q6_K | 13.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/llama-2-16b-nastychat-GGUF/resolve/main/llama-2-16b-nastychat.Q8_0.gguf) | Q8_0 | 17.3 | fast, best quality |
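
The size column also lets you pre-select a quant mechanically. The helper below is just an illustration — the sizes are copied from the table above, and note that the file size is only a lower bound on memory use (context/KV cache comes on top):

```python
# (name, size in GB) pairs copied from the quant table above, smallest first.
QUANTS = [
    ("Q2_K", 6.1), ("IQ3_XS", 6.8), ("IQ3_S", 7.1), ("Q3_K_S", 7.1),
    ("IQ3_M", 7.5), ("Q3_K_M", 8.0), ("Q3_K_L", 8.7), ("IQ4_XS", 8.8),
    ("Q4_K_S", 9.3), ("Q4_K_M", 9.9), ("Q5_K_S", 11.3), ("Q5_K_M", 11.6),
    ("Q6_K", 13.4), ("Q8_0", 17.3),
]

def largest_fitting_quant(budget_gb):
    """Return the largest quant whose file fits the given budget, or None."""
    fitting = [q for q in QUANTS if q[1] <= budget_gb]
    return max(fitting, key=lambda q: q[1]) if fitting else None
```

For example, with roughly 10 GB to spare this picks Q4_K_M, which matches the "fast, recommended" guidance in the table.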
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
| {"language": ["en"], "library_name": "transformers", "tags": ["not-for-all-audiences"], "base_model": "chargoddard/llama-2-16b-nastychat", "quantized_by": "mradermacher"} | mradermacher/llama-2-16b-nastychat-GGUF | null | [
"transformers",
"gguf",
"not-for-all-audiences",
"en",
"base_model:chargoddard/llama-2-16b-nastychat",
"endpoints_compatible",
"region:us"
] | null | 2024-05-02T02:08:10+00:00 |
null | null |
# joeshmoethefunnyone/pythia-410m-Q8_0-GGUF
This model was converted to GGUF format from [`EleutherAI/pythia-410m`](https://huggingface.co/EleutherAI/pythia-410m) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/EleutherAI/pythia-410m) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo joeshmoethefunnyone/pythia-410m-Q8_0-GGUF --model pythia-410m.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo joeshmoethefunnyone/pythia-410m-Q8_0-GGUF --model pythia-410m.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m pythia-410m.Q8_0.gguf -n 128
```
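
After downloading, a quick integrity check is to read the fixed-size GGUF header. This sketch follows the published GGUF layout (4-byte `GGUF` magic, then a little-endian `uint32` version, `uint64` tensor count, and `uint64` metadata KV count); it is illustrative and not part of llama.cpp itself:

```python
import struct

def read_gguf_header(path):
    """Read magic, version, tensor count, and metadata KV count from a GGUF file."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file (magic={magic!r})")
        version, = struct.unpack("<I", f.read(4))
        tensor_count, = struct.unpack("<Q", f.read(8))
        kv_count, = struct.unpack("<Q", f.read(8))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }
```

A truncated or corrupted download will typically fail the magic check or report implausible counts, which is cheaper to catch here than at model load time.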
| {"language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia", "llama-cpp", "gguf-my-repo"], "datasets": ["EleutherAI/pile"]} | joeshmoethefunnyone/pythia-410m-Q8_0-GGUF | null | [
"gguf",
"pytorch",
"causal-lm",
"pythia",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:EleutherAI/pile",
"license:apache-2.0",
"region:us"
] | null | 2024-05-02T02:08:35+00:00 |