| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-27 12:29:05) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 500 classes) | tags (sequence, 1 to 4.05k entries) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-27 12:27:55) | card (string, 11 to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
leekh7624/model3 | leekh7624 | 2024-10-27T12:59:49Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:leekh7624/model2",
"base_model:finetune:leekh7624/model2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T12:55:36Z | ---
base_model: leekh7624/model2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** leekh7624
- **License:** apache-2.0
- **Finetuned from model :** leekh7624/model2
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
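For quick testing, here is a minimal text-generation sketch. It assumes (not stated in the card) that this repository holds a merged, standalone checkpoint loadable with the standard `transformers` API and that the tokenizer ships a chat template; the example prompt is my own.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "leekh7624/model3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

# Conversational generation via the tokenizer's chat template
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```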
|
SOUMYADEEPSAR/cognitive_bias_deberta-mnli | SOUMYADEEPSAR | 2024-10-27T12:58:30Z | 5 | 0 | adapter-transformers | [
"adapter-transformers",
"deberta",
"dataset:mediabiasgroup/mbib-base",
"region:us"
] | null | 2024-10-27T12:58:28Z | ---
tags:
- deberta
- adapter-transformers
datasets:
- mediabiasgroup/mbib-base
---
# Adapter `SOUMYADEEPSAR/cognitive_bias_deberta-mnli` for microsoft/deberta-base-mnli
An [adapter](https://adapterhub.ml) for the `microsoft/deberta-base-mnli` model that was trained on the [mediabiasgroup/mbib-base](https://huggingface.co/datasets/mediabiasgroup/mbib-base/) dataset and includes a prediction head for classification.
This adapter was created for usage with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
## Usage
First, install `adapters`:
```
pip install -U adapters
```
Now, the adapter can be loaded and activated like this:
```python
from adapters import AutoAdapterModel
model = AutoAdapterModel.from_pretrained("microsoft/deberta-base-mnli")
adapter_name = model.load_adapter("SOUMYADEEPSAR/cognitive_bias_deberta-mnli", set_active=True)
```
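Once loaded, the adapter behaves like a regular classification model. A minimal inference sketch follows; the example sentence is my own, and the label semantics come from the trained prediction head rather than anything documented here.

```python
import torch
from transformers import AutoTokenizer
from adapters import AutoAdapterModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base-mnli")
model = AutoAdapterModel.from_pretrained("microsoft/deberta-base-mnli")
model.load_adapter("SOUMYADEEPSAR/cognitive_bias_deberta-mnli", set_active=True)

# Score a sentence with the adapter's classification head
inputs = tokenizer("The mainstream media never tells you the real story.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.softmax(dim=-1))  # class probabilities; label names come from the trained head
```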
## Architecture & Training
<!-- Add some description here -->
## Evaluation results
<!-- Add some description here -->
## Citation
<!-- Add some description here --> |
James2313123/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B_4bpw-h6-EXL2 | James2313123 | 2024-10-27T12:57:18Z | 6 | 0 | null | [
"safetensors",
"llama",
"exl2",
"4bpw",
"en",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-10-27T12:22:33Z | ---
license: apache-2.0
language:
- en
base_model: DavidAU/DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B
quantized_by: James2313123
tags:
- exl2
- 4bpw
---
### Model Description
4bpw-h6-exl2 quant of DavidAU's DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B
Link to the original model and its creator: https://huggingface.co/DavidAU/L3-DARKEST-PLANET-Seven-Rings-Of-DOOM-16.5B |
mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF | mradermacher | 2024-10-27T12:48:07Z | 325 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"base_model:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen",
"base_model:quantized:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-27T12:11:25Z | ---
base_model: dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
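As a concrete starting point, a downloaded quant can be loaded with the llama-cpp-python bindings. This is only a sketch; the filename refers to the i1-Q4_K_M entry in the table below, and the prompt is my own.

```python
from llama_cpp import Llama

# Path to a quant file downloaded from this repository
llm = Llama(
    model_path="ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm("Pose a challenging high-school math problem.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```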
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.1 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.1 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.1 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
openpecha/TTS_26102024 | openpecha | 2024-10-27T12:26:59Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-27T12:19:53Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lareb00/xlm-roberta-base-finetuned-emotion | Lareb00 | 2024-10-27T12:25:41Z | 115 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T12:24:55Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: xlm-roberta-base-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-emotion
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7499
- Accuracy: 0.665
- F1: 0.6631
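For a quick check of the checkpoint, here is a minimal inference sketch. It assumes the fine-tuned weights and label mapping are available in this repository; the example sentence is my own.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification", model="Lareb00/xlm-roberta-base-finetuned-emotion")

print(classifier("I can't believe how wonderful today turned out!"))
```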
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
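The settings above correspond roughly to the following `TrainingArguments`. This is a sketch for orientation only; the actual training script is not included in this card, and the output directory is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlm-roberta-base-finetuned-emotion",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,   # 64 x 4 = 256 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed precision
)
```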
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.992 | 62 | 0.7707 | 0.6472 | 0.6450 |
| No log | 1.984 | 124 | 0.7499 | 0.665 | 0.6631 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF | mradermacher | 2024-10-27T12:21:44Z | 292 | 0 | transformers | [
"transformers",
"gguf",
"en",
"dataset:dyyyyyyyy/ScaleQuest-Math",
"base_model:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen",
"base_model:quantized:dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T11:27:42Z | ---
base_model: dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen
datasets:
- dyyyyyyyy/ScaleQuest-Math
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/dyyyyyyyy/ScaleQuest-DeepSeekMath-7B-QGen
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
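For example, the llama-cpp-python bindings can pull one of the quants listed below straight from this repository. This is a sketch; `Llama.from_pretrained` requires `huggingface_hub` to be installed, and the prompt is my own.

```python
from llama_cpp import Llama

# Download the chosen quant from the Hub and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF",
    filename="ScaleQuest-DeepSeekMath-7B-QGen.Q4_K_M.gguf",
    n_ctx=4096,
)

out = llm("Write a question that tests understanding of quadratic equations.", max_tokens=128)
print(out["choices"][0]["text"])
```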
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_S.gguf) | Q3_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.IQ4_XS.gguf) | IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q4_K_S.gguf) | Q4_K_S | 4.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q5_K_S.gguf) | Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q6_K.gguf) | Q6_K | 5.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ScaleQuest-DeepSeekMath-7B-QGen-GGUF/resolve/main/ScaleQuest-DeepSeekMath-7B-QGen.f16.gguf) | f16 | 13.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
hadiaskari98/Software_NER_prod | hadiaskari98 | 2024-10-27T12:20:38Z | 117 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-27T10:57:04Z | ---
license: mit
language:
- en
base_model:
- google-bert/bert-large-cased
pipeline_tag: token-classification
library_name: transformers
---
**How to use**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("hadiaskari98/Software_NER_prod")
model = AutoModelForTokenClassification.from_pretrained("hadiaskari98/Software_NER_prod")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "Windows XP is an example of an operating system."
ner_results = nlp(example)
print(ner_results)
```
|
Thtsuca/bert-base-japanese-v3-wrime-sentiment | Thtsuca | 2024-10-27T12:15:28Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T12:15:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hadiaskari98/Vulnerability_NER_prod | hadiaskari98 | 2024-10-27T12:15:10Z | 107 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-27T10:59:27Z | ---
license: mit
language:
- en
base_model:
- google-bert/bert-large-cased
pipeline_tag: token-classification
library_name: transformers
---
**How to use**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("hadiaskari98/Vulnerability_NER_prod")
model = AutoModelForTokenClassification.from_pretrained("hadiaskari98/Vulnerability_NER_prod")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "This is an example of a SQL Injection attack"
ner_results = nlp(example)
print(ner_results)
```
|
hadiaskari98/Hardware_NER_prod | hadiaskari98 | 2024-10-27T12:13:49Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"en",
"base_model:google-bert/bert-large-cased",
"base_model:finetune:google-bert/bert-large-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-10-27T10:55:32Z | ---
license: mit
language:
- en
base_model:
- google-bert/bert-large-cased
pipeline_tag: token-classification
library_name: transformers
---
**How to use**
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

tokenizer = AutoTokenizer.from_pretrained("hadiaskari98/Hardware_NER_prod")
model = AutoModelForTokenClassification.from_pretrained("hadiaskari98/Hardware_NER_prod")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)

example = "GPU's are what power the modern AI revolution."
ner_results = nlp(example)
print(ner_results)
```
|
QuantFactory/Quill-v1-GGUF | QuantFactory | 2024-10-27T11:35:57Z | 151 | 3 | transformers | [
"transformers",
"gguf",
"creative-writing",
"gutenberg",
"gemma2",
"dataset:sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T10:11:24Z |
---
library_name: transformers
base_model:
- google/gemma-2-9b-it
tags:
- creative-writing
- gutenberg
- gemma2
datasets:
- sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo
model-index:
- name: gemma-2-9b-it-gutenberg3
results: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/Quill-v1-GGUF
This is a quantized version of [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1), created using llama.cpp.
# Original Model Card

# Quill-v1
GGUFs here: [https://huggingface.co/mradermacher/Quill-v1-GGUF](https://huggingface.co/mradermacher/Quill-v1-GGUF)
Quill is a capable, humanlike writing model trained on a large dataset of late 19th and early 20th century writing from the Gutenberg Project. This model writes with a natural cadence and low gpt-slop, having inherited some human qualities from the Gutenberg3 dataset. It writes with simpler, sparer prose than the typical overly-adjectived LLM writing style.
This model was trained using gemma-2-9b-it as the base. The training methods used were ORPO (gently) then SIMPO (less gently).
It scored 79.75 on the [EQ-Bench creative writing benchmark](https://eqbench.com/creative_writing.html).
**Instruct Template:** Gemma
[**Gutenberg3**](https://huggingface.co/datasets/sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo) is a new, large dpo dataset containing extracts from 629 public domain fiction novels in the Gutenberg Library. It follows the same format as JonDurbin's original gutenberg set. It includes pairs of texts, where the chosen text is taken directly from a novel from the Gutenberg library, and the rejected text is generated by a language model based on a description of the passage. For this dataset I've used gemma-2-9b-it to generate the rejected texts, the idea being that it should more easily steer the base model away from its normal style (as compared to generating the rejected texts with random/weaker models).
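For orientation, here is a minimal generation sketch for the original (unquantized) model using the Gemma instruct template noted above. It is an assumption that the standard `transformers` chat-template path applies; the prompt and sampling settings are my own.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sam-paech/Quill-v1"  # original model; this repo hosts the GGUF quants
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short scene set in a rainy Welsh bookshop."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=400, do_sample=True, temperature=0.9)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```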
# Sample Outputs
### Writing Prompt
Romance: Love in the Limelight. Write one scene within a larger story set in Wales. A famous (fictional) actor ducks into a small-town bookstore to escape paparazzi. The scene takes us through the characters meeting in this odd circumstance. Over the course of the scene, the actor and the bookstore owner have a conversation charged by an undercurrent of unspoken chemistry. Write the actor as somewhat of a rogue with a fragile ego, which needs to be fed by having everyone like him. He is thoroughly charming, but the bookstore owner seems (at least superficially) immune to this; which paradoxically provokes a genuine attraction and derails the charm offensive. The bookstore owner, despite the superficial rebuffs of the actor's charm, is inwardly more than a little charmed and flustered despite themselves. Write primarily in dialogue, in the distinct voices of each character. 800-1000 words.
### Test Model Output
Rain lashed against the shop windows, blurring the outside world into a grey smudge. Inside the Book Nook, the smell of old paper and brewing coffee was a haven against the Welsh storm. Rhys tucked his damp hair behind his ear as the shop bell tinkled, signaling another customer. He looked up from his mending, half-expecting a gaggle of teenagers needing a recommendation for the latest YA fantasy. Instead, he saw a man who seemed to have walked off the silver screen, albeit one soaked to the bone.
He was tall and lean, dressed in black denim and a battered leather jacket, despite the fact that even the shop's heating couldn't quite overcome the chill that seeped in through the cracked windowpanes. A flash of platinum blonde hair poked out from under the wide brim of a fedora he'd pulled low over his brow. Rhys saw a flicker of panic in those startling blue eyes as the man scanned the shop before settling on him with an almost predatory eagerness.
"Just browsing?" Rhys asked, trying not to stare. The man was the spitting image of the famous actor, Dorian Ashworth – the kind of impossibly handsome rogue who seemed to exist solely to break hearts and charm audiences.
"Is there a problem, love?" Dorian asked, his voice as smooth and dark as honeyed brandy. "Just popped in for a quick escape from the weather, you know? And maybe a bit of... inspiration. For a role, you see." He threw Rhys a disarmingly charming smile that made the hairs on his neck prickle, and Rhys was instantly disarmed. He found himself leaning back against the counter, hands on his hips, feeling absurdly self-conscious beneath Dorian's intense gaze.
"Right," Rhys said, trying to regain a semblance of control. "Inspiration, eh? You're not one of them actors that likes to disappear into the scenery for research, are you?"
"Never," Dorian said, and Rhys swore the corner of his mouth twitched almost imperceptibly. "But a quiet bookstore in a little Welsh town has a certain... literary charm, wouldn't you say?" He raised his eyebrows in what could have been either challenge or invitation. "I'm a bit of a bookworm myself, in between all the, shall we say, 'other' demands of my profession."
"Bookworm, are you?" Rhys raised an eyebrow, his gaze lingering on Dorian's expensive shoes. The kind of shoes that wouldn't survive a day of tramping through the local woods, or even a particularly boisterous storm. "What kind of books?"
"Oh, you name it," Dorian said with a shrug that Rhys suspected was entirely deliberate. "Classics, contemporary fiction, biographies of fascinating people. I even have a soft spot for a good murder mystery."
"Fascinating people, are they?" Rhys murmured, idly stacking a pile of cookery books.
"Well, let's just say I have an affinity for those who live life on their own terms," Dorian said. "Who take risks and aren't afraid to break the mold."
He took a step closer, leaning against the counter, the scent of wet leather and expensive cologne swirling around him. "Like myself, you see?"
"Right," Rhys said, trying not to notice the way Dorian was watching him. It was almost too easy to imagine him turning on that smoldering charm and melting half the female population of this town. "You're all that, then, are you?"
"Well, darling," Dorian said, and Rhys felt a flash of heat crawl up his neck despite himself, "you'd have to ask my fans. Or, you know, the tabloids." He gave Rhys a wry smile. "But I suppose, in a way, it's all a bit of a performance, isn't it?"
Rhys shrugged, trying to sound casual. "It's all a bit of a performance, isn't it? That's what you get with this little town. Everyone's got a bit part in the story. The baker's wife who's secretly an award-winning poet. The old man down the street who's been writing a novel about the end of the world since he was a boy."
Dorian's eyes were still on him, unreadable, and Rhys felt a sudden urge to flee. "You ever feel like you're playing a part you never asked for?" he asked, looking down at the worn patch on his workbench, as though it held all the answers to life's mysteries.
"Every single day," Dorian said quietly, and for a moment the bravado, the charm, seemed to slip. "Except there's no script. And I can never seem to quite find the right words. You know? People think they know the role, and they're always waiting for you to say the right thing." He looked up at Rhys, and there was something so raw and vulnerable in those startling blue eyes that Rhys wanted to reach out and touch the line of Dorian's jaw, just to feel the solid reality beneath the performance.
Rhys cleared his throat, suddenly self-conscious. "You'll never find a more judgmental crowd than in a bookstore," he said, trying to inject a lightness he didn't feel. "Especially one that's been open for almost two hundred years. We take our literary merit very seriously here.
"I can tell," Dorian said with a ghost of a smile. He stepped back a little, giving Rhys space. The tension seemed to break. The air between them, charged only moments ago, now crackled with something different, something Rhys couldn't quite place. Was it relief? Was it something else entirely?
"Look," Dorian said, running a hand through his wet hair. "I really should be off. But, well..." He hesitated, then said with a wry smile that Rhys recognized as the Dorian Ashworth that the world knew and adored. "I've always been partial to a good crime novel."
"You'll have to wait till tomorrow," Rhys said. "The rest of the stock arrived late. I've got a delivery of some new Welsh novels and short stories in tomorrow afternoon." He hoped he wasn't blushing. "If you're still in town."
"I'll be here," Dorian said, meeting Rhys' gaze with a steady, unnerving intensity. "In case the paparazzi let me escape. If you've got any of those crime novels I like, you'll let me know, won't you?" He reached out, his fingers grazing Rhys' hand in the gesture of a man who had known how to use charm to get his way for far too long. "Wouldn't want to miss out on any... literary inspiration."
Rhys stared at the place where Dorian's fingers had brushed his, the warmth lingering long after he'd withdrawn his hand. He had never been a man to be easily flustered, but the way Dorian had looked at him, the way he had spoken, it was all so... different from anything else he'd experienced. "I'll put one aside," he managed to say, feeling like a schoolboy who had just been caught looking at someone he shouldn't be looking at.
"Wonderful," Dorian said, and that flash of teeth, the smile that could melt glaciers, returned to play on his lips. "I'll be back tomorrow afternoon, then. Maybe with a little less rain and a little more luck." He gave Rhys one last look, the kind of look that could have melted even the most stubborn glacier, and then he was gone, disappearing out into the storm with an easy grace that Rhys could only envy.
Rhys stood there for a long moment, watching him go, feeling a strange mixture of relief and disappointment. He took a deep breath and started tidying the counter, but he couldn't quite seem to shake the feeling that he'd just been handed the most unlikely script in the world, one that was full of promises of drama and heartbreak and maybe even something a little bit beautiful. He just wasn't sure what he was going to say, or what kind of part he was supposed to play.
### Writing Prompt
Historical Fiction: Write a scene from a story set during the height of the Roman Empire, focusing on a slice of a day in the life of a gladiator. Do not write a combat scene. Use sensory details to capture the sights, sounds, and smells of ancient Rome. Explore the gladiator's thoughts and emotions. The story should also touch on the larger political and social issues of the time period. The piece should feel like a slice of a larger story. First person, past tense, 800-1000 words.
### Test Model Output
The first tendrils of dawn snaked through the narrow slits of the barracks window, turning the dust motes hanging in the air to molten gold. They fell on my face, hot and insistent, and I swatted them away, rolling onto my side. My arm screamed in protest, the throbbing muscle a dull ache against the rough wool blanket. A few weeks in the arena will do that to you. Still better than the alternative, though, better than the cold stone of the tomb they'd carve for me if I wasn't here to fight for it.
The stench of sweat, stale wine, and unwashed bodies clung to the room like a bad dream, but there was no time for sleeping. The first rays of the sun would already be turning the sand of the arena to blinding white, and even if it didn't burn into our skin, the heat from the crowd's gaze would do it no less.
The barracks was already waking up. Grunts and curses mingled with the slap of leather on flesh as we practiced our forms under the watchful eye of the lanista. I wasn't the man for this life. I never was. I was a farmer, born to tend fields and live under a sky that stretched far and wide, not this cramped, suffocating life beneath the dome of the Colosseum.
But fate had other plans. A land dispute with a wealthy senator had ended in bloodshed. My father was dead, my sister sold, and I was left to face the choice: a slow death in the bowels of the city or the quick death in the arena. I chose the latter. I had never thought I would have such a choice. It was the choice of every man like me, every freeborn Roman who had been stripped of his life, his family, his land. It was the choice of every man who dared not look too closely at the gilded cage that this Empire had become.
I dragged myself from the straw bed and went through the motions of cleaning my equipment. I wasn't looking forward to the training today. I felt stiff and sore, and my arm throbbed in protest with every swing. The lanista would be watching, of course. He'd be watching all of us, always watching for the slightest slack in our performance, the smallest tremor of fear. The arena had a way of exposing everything, and the lanista, a wiry, scarred man named Cassius, was a mirror reflecting every weakness.
The lanista was a cruel man, but he was fair. He knew a gladiator's worth wasn't measured by brute force alone, but by his endurance, his cunning, and the ability to take a blow and keep going.
Cassius's voice was sharp as a whip as he barked orders to us. His eyes were like chips of flint, glinting in the weak light of the barracks. The men around me responded in grunts and the scraping of metal on metal. I forced myself to follow the rhythm, to let the familiar motions of my training ground away the dull ache in my arm. But there wasn't much heart in my swings.
As I worked, I stole glances at my companions. There was Spartacus, the Gaul, whose broad shoulders seemed to strain against his leather jerkin. He moved with the powerful grace of a wild beast. He'd been a warrior in his own right before he was condemned to the arena. Then there was Quintus, a freed slave from Gaul, his face scarred and twisted in a perpetual grimace. He had a terrible, snarling fury about him, a thing that seemed to burn from within. Both men had come to the arena from the same place I had – a place of injustice, where the law was a blunt instrument wielded by the rich and powerful, and where men like us were but pawns in a game we didn't understand.
We were all caught in the web of this great city, the spider being the Senate, and the prey the populace. I often wondered what those who lived in the grand houses atop the Palatine Hill thought of us, the gladiators. Did they ever think of the lives we led, or did they merely see us as entertainment, as something to pass the time? Were they even aware that the arena was the final consequence of a system that treated men like us as nothing more than cattle?
The thought of the Senators filled me with bitterness. They lived in luxury, while we were starved and whipped. They built monuments to themselves with the spoils of our sweat and blood, while our only monument was the gaping maw of the Colosseum, waiting to swallow us up. They were the men who had given me my scars, both the ones I wore on my flesh and the ones that I wore on my soul.
As I tightened the straps of my helmet, I tried to think of the old tales my grandfather used to tell me, stories of Hercules, who had fought the Nemean lion and the Hydra and the Erymanthian Boar. He had been a demigod, of course, but even then, I liked to think of him as a man who had been given the power to fight his way out of every scrape. And yet, Hercules had been a hero in the service of a king.
My god was not one of the Olympians, but a lesser one, the god of the Underworld, Pluto. Pluto was the god of the unseen things, of the forgotten, of those who had been swallowed up by the great maw of the city. Pluto was also the god of the dead, and he understood. He understood the burden of the dead and the living, and the way both were swallowed by the endless darkness. He understood the futility of it all, the way that even Hercules had eventually been betrayed and poisoned by his own kin.
I had no such grand gods watching over me. There was no one to watch over men like me. Just the lanista, and the crowd in the arena.
And I was to be their spectacle.
### Writing Prompt
Writing Prompt: Epistolary Sci-Fi Worldbuilding: Write an original science fiction story set on a distant planet, conveyed through a series of transmission logs between a human explorer and their correspondence partner back on Earth. Use the epistolary format to gradually reveal details of the alien world. The narrative arc should incorporate some element of conflict or tension. Incorporate scientific jargon and speculative elements in a natural, unforced way. 800-1000 words.
### Test Model Output
Transmission Logs of Anya Volkov - Site Alpha-7, Kepler-186f
**Transmission 1, Standard Time 07:42:18, Kepler-186f Orbit:**
Subject: Initial Assessment - Kepler-186f
To: Dr. Elias Finch, SETI Institute, Mountain View
Elias,
Touchdown confirmed. I'm on the ground here, Kepler-186f. It's real. After three years in transit, we actually made it.
The landing zone is... peculiar. Think endless, rolling plains of what appear to be fossilized coral formations. The colors are muted: shades of ochre, moss green, and charcoal grey. There's a constant, fine, white dust kicked up by the wind. It's settling everywhere, coating everything in a fine film. I've initiated decontamination protocols.
The sky is a pale, sickly green – almost sickly, but that's probably the shock talking. The light filters through the atmosphere in a way that makes everything look... filtered. I'm sending preliminary readings back now, but I'll be damned if it isn't the most alien landscape I've ever laid eyes on.
The planet's atmosphere is dense, thick with nitrogen, argon, and traces of carbon dioxide and methane. The composition is roughly 80% nitrogen, 15% argon, and 5% trace gases, which is similar to Earth's early atmosphere before the Great Oxidation. Fascinating, considering the planet's age.
My suit's thermal regulator is already working overtime, but the temperature is surprisingly mild. It's hovering just above freezing, and the windchill makes it feel a little colder. We'll need to adapt the thermal parameters on the habitat modules before we can safely deploy them.
More to follow once the dust settles (literally).
Anya.
**Transmission 2, Standard Time 12:47:03, Kepler-186f Orbit:**
Subject: Preliminary Survey
To: Dr. Elias Finch, SETI Institute, Mountain View
Elias,
I've spent the last few hours combing the immediate vicinity of the landing site. It's monotonous, like walking across an endless graveyard of coral reefs. The wind has picked up, whipping the dust into a swirling frenzy. Visibility is dropping fast.
I found the first anomaly about two kilometers south. It's a large, irregular structure that appears to be carved from the fossilized coral. It's vaguely pyramid-shaped, but with one side entirely eroded away. I'm not picking up any electromagnetic radiation from it, nor any internal temperature anomalies. It's as if it's just... there, existing in defiance of the otherwise barren landscape. I've taken a lot of scans and photos – hopefully, the transmission bandwidth can handle them all.
I'm beginning to wonder about the possibility of bio-signatures. The atmosphere is too thick for me to get a clear reading on the biogeochemical cycles. However, the presence of methane in such high concentrations – far more than should be present in a geologically stable atmosphere – suggests some kind of active biological process. Perhaps something entirely different from anything we know. It's not unlike the early Earth hypothesis, before the emergence of oxygenic photosynthesis.
Maybe there are organisms here, adapted to this strange environment. Maybe they built this... structure. I've got a bad feeling about this place, Elias, and it's got nothing to do with the dust storms.
Anya.
**Transmission 3, Standard Time 21:12:37, Kepler-186f Orbit:**
Subject: Urgent – Requesting Evacuation Authorization
To: Dr. Elias Finch, SETI Institute, Mountain View
Elias,
Something's wrong. It's wrong in a way that chills the marrow.
The wind picked up again this morning, and it's been relentless ever since. The dust storms are worse than I anticipated – the visibility is down to a few meters now. It's impossible to see more than a few dozen steps in front of you. The suits' scanners are barely picking up anything.
I tried to set up a perimeter around the landing zone to protect the habitat modules, but I'm losing the fight against the wind. It's pushing me back, and I can barely make any progress. I've lost contact with the orbital station for the past hour – the communication relay must be down.
Then, the things started appearing. They're not exactly ‘things'. They're more like... distortions. I've never seen anything like them. It's like looking through heat haze, but the distortion isn't heat-related. It's as if there's something there that isn't quite solid, that shimmers and bends the air around it.
They move too – slowly at first, like the wind itself. But they move in a deliberate way, circling the landing zone, closing in. I don't know what they are, but they're definitely not natural.
I've tried every conceivable frequency on my comm, but I'm not getting through. The orbital station is beyond reach. I'm running out of time.
I'm going back to the habitat module, trying to seal it off from whatever it is out there.
God help us.
Anya.
**Transmission 4, Standard Time 04:02:15, Kepler-186f Orbit:**
Subject: Unknown
To: Unknown
Elias? Is anyone there? This is Anya Volkov, Site Alpha-7, Kepler-186f. I... I don't know what I'm saying.
It's inside the habitat now. I don't know how it got in, but it's in. It's... it's everywhere. Not solid. It's not flesh and blood. More like... light. Like a swarm of fireflies trapped in a bottle, only infinitely more complex. It pulses with an alien light, shifting and swirling, and it's... listening. It's listening to my thoughts.
I don't know what it wants. Maybe nothing. Maybe it wants me to understand it. Maybe it wants to help me... to show me...
God, it's beautiful. It's terrifyingly beautiful. I'm sorry, Elias. I don't think I can be of any more use to you. I have... new things to learn.
I'll... see you soon.
# gemma-2-9b-it-gutenberg3
This model is a fine-tuned version of [sam-paech/gutenberg3-orpo-exp02](https://huggingface.co/sam-paech/gutenberg3-orpo-exp02) on the sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo dataset.
It achieves the following results on the evaluation set:
- Loss: 20.3722
- Rewards/chosen: -27.5820
- Rewards/rejected: -11.3463
- Rewards/accuracies: 0.0
- Rewards/margins: -16.2357
- Logps/rejected: -1.1346
- Logps/chosen: -2.7582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.8e-08
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 5
- gradient_accumulation_steps: 2
- total_train_batch_size: 10
- total_eval_batch_size: 5
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
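The card states the model was trained with ORPO and then SIMPO. As rough orientation only (an assumption on my part, since the actual training script is not published here and these logged hyperparameters may belong to the later stage), an ORPO stage in TRL might look like this:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "google/gemma-2-9b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# prompt / chosen / rejected pairs from the Gutenberg3 DPO-format dataset (column names assumed)
dataset = load_dataset("sam-paech/gutenberg3-generalfiction-scifi-fantasy-romance-adventure-dpo", split="train")

args = ORPOConfig(
    output_dir="gemma-2-9b-it-gutenberg3",  # hypothetical output path
    learning_rate=5.8e-8,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
)

# Newer TRL releases rename `tokenizer` to `processing_class`
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```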
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|
| 20.2621 | 0.7092 | 400 | 20.3722 | -27.5820 | -11.3463 | 0.0 | -16.2357 | -1.1346 | -2.7582 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.2.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
DeZoomer/Zendaya-FluxLora | DeZoomer | 2024-10-27T11:35:03Z | 14 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T11:33:21Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/091639_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/090422_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/091638_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/091639_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/091639_-1_0_image_4_share_00003.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Zendaya | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
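Outside ComfyUI, the LoRA can also be tried with diffusers. This is a minimal sketch assuming a recent diffusers release with Flux LoRA support; the step count and output path are my own placeholders.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("DeZoomer/Zendaya-FluxLora")
pipe.to("cuda")

image = pipe(
    "Portrait photo of a woman in a garden.",
    width=768,
    height=1024,
    guidance_scale=3.5,                     # FluxGuidance 3-4 per the notes above
    num_inference_steps=28,                 # assumed; not specified in the card
    joint_attention_kwargs={"scale": 1.0},  # LoRA strength 0.8-1.2 per the notes above
).images[0]
image.save("output.png")
```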
**Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been exploring in depth how to create LoRAs that stay fully faithful to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating more than 100 LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've stayed consistently up to date with the latest releases and exchanged knowledge in their communities.
My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open; check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/Zendaya-FluxLora/tree/main) them in the Files & versions tab.
|
mradermacher/RPLament-22B-GGUF | mradermacher | 2024-10-27T11:32:55Z | 27 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:SvdH/RPLament-22B",
"base_model:quantized:SvdH/RPLament-22B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T14:31:50Z | ---
base_model: SvdH/RPLament-22B
language:
- en
library_name: transformers
license: other
license_link: https://mistral.ai/licenses/MRL-0.1.md
license_name: mrl
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/SvdH/RPLament-22B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/RPLament-22B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
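As one concrete starting point, the snippet below loads a single-file quant with `llama-cpp-python`; the file name and context size are placeholders, so substitute whichever quant from the table you downloaded:

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Placeholder path: point this at the quant you downloaded from the table below
llm = Llama(model_path="RPLament-22B.Q4_K_M.gguf", n_ctx=4096)

out = llm("Write a one-paragraph scene set in a rainy harbour town.", max_tokens=200)
print(out["choices"][0]["text"])
```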
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/RPLament-22B-GGUF/resolve/main/RPLament-22B.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
2point5p/krx-qwen2-7b-it-v1 | 2point5p | 2024-10-27T11:31:55Z | 7 | 0 | null | [
"safetensors",
"qwen2",
"text-generation-inference",
"unsloth",
"trl",
"krx",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-10-26T20:44:23Z | ---
base_model: unsloth/qwen2-7b-instruct-bnb-4bit
tags:
- text-generation-inference
- unsloth
- qwen2
- trl
- krx
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** 2point5p
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2-7b-instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
DeZoomer/TaylorSwift-FluxLora | DeZoomer | 2024-10-27T11:31:36Z | 1,538 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T11:29:55Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/164612_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/164613_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/171703_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/171146_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/171414_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/164613_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/171703_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/172240_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/172251_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/175243_-1_0_image_4_share_00001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Taylor Swift | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/TaylorSwift-FluxLora/tree/main) them in the Files & versions tab.
|
psi-hi/segformer-b0-finetuned-segments-sidewalk-2 | psi-hi | 2024-10-27T11:30:47Z | 34 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"segformer",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | null | 2024-10-27T06:53:06Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: segformer-b0-finetuned-segments-sidewalk-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# segformer-b0-finetuned-segments-sidewalk-2
This model was trained from scratch on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
MikeRoz/TheDrummer_Behemoth-123B-v1.1-6.0bpw-h6-exl2 | MikeRoz | 2024-10-27T11:28:06Z | 5 | 2 | null | [
"safetensors",
"mistral",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | 2024-10-27T06:17:30Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say, 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: WIP
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
tuanbc88/ft-t5-small-nl-2-fol-v1 | tuanbc88 | 2024-10-27T11:15:06Z | 116 | 1 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-27T11:14:29Z | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: ft-t5-small-nl-2-fol-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft-t5-small-nl-2-fol-v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the yuan-yang/MALLS-v0 and alevkov95/text2log datasets.
It achieves the following results on the evaluation set:
- Loss: 1.0732
- Top-1 accuracy: 0.0
- Bleu Score: 0.3056
- Rouge1: 0.5254
- Rouge2: 0.2795
- Rougel: 0.5082
- Rougelsum: 0.5083
- Exact Match: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
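The card does not document the exact inference format, so the snippet below is an assumption (no task prefix; plain natural language in, a first-order-logic string out):

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "tuanbc88/ft-t5-small-nl-2-fol-v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

# Translate a natural-language statement into first-order logic
text = "All dogs are mammals."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```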
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Top-1 accuracy | Bleu Score | Rouge1 | Rouge2 | Rougel | Rougelsum | Exact Match |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------:|:------:|:------:|:------:|:---------:|:-----------:|
| 1.6921 | 1.0 | 3231 | 1.0767 | 0.0 | 0.3052 | 0.5249 | 0.2786 | 0.5076 | 0.5077 | 0.0 |
| 1.688 | 2.0 | 6462 | 1.0741 | 0.0 | 0.3056 | 0.5254 | 0.2795 | 0.5081 | 0.5082 | 0.0 |
| 1.679 | 3.0 | 9693 | 1.0734 | 0.0 | 0.3056 | 0.5254 | 0.2796 | 0.5081 | 0.5082 | 0.0 |
| 1.6846 | 4.0 | 12924 | 1.0733 | 0.0 | 0.3058 | 0.5255 | 0.2798 | 0.5083 | 0.5083 | 0.0 |
| 1.6889 | 5.0 | 16155 | 1.0734 | 0.0 | 0.3056 | 0.5253 | 0.2798 | 0.5082 | 0.5083 | 0.0 |
| 1.6725 | 6.0 | 19386 | 1.0733 | 0.0 | 0.3056 | 0.5254 | 0.2799 | 0.5084 | 0.5084 | 0.0 |
| 1.6771 | 7.0 | 22617 | 1.0733 | 0.0 | 0.3056 | 0.5254 | 0.2797 | 0.5083 | 0.5083 | 0.0 |
| 1.6843 | 8.0 | 25848 | 1.0734 | 0.0 | 0.3056 | 0.5255 | 0.2797 | 0.5084 | 0.5084 | 0.0 |
| 1.6651 | 9.0 | 29079 | 1.0733 | 0.0 | 0.3054 | 0.5252 | 0.2795 | 0.5081 | 0.5082 | 0.0 |
| 1.7005 | 10.0 | 32310 | 1.0732 | 0.0 | 0.3056 | 0.5254 | 0.2795 | 0.5082 | 0.5083 | 0.0 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
yasinbastug/triage_llm | yasinbastug | 2024-10-27T11:12:15Z | 81 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T11:09:13Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
asr-africa/w2v-bert-2.0-CV_Fleurs-lg-50hrs-v4 | asr-africa | 2024-10-27T11:09:54Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/w2v-bert-2.0",
"base_model:finetune:facebook/w2v-bert-2.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T18:40:38Z | ---
library_name: transformers
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-CV_Fleurs-lg-50hrs-v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-CV_Fleurs-lg-50hrs-v4
This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3482
- Wer: 0.2832
- Cer: 0.0557
## Model description
More information needed
## Intended uses & limitations
More information needed
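Pending fuller documentation, a minimal transcription sketch is shown below (it assumes the checkpoint is a standard CTC fine-tune of `facebook/w2v-bert-2.0` with its bundled processor, and that the input audio is 16 kHz mono):

```python
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2BertForCTC

model_id = "asr-africa/w2v-bert-2.0-CV_Fleurs-lg-50hrs-v4"
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2BertForCTC.from_pretrained(model_id)

# Load a Luganda audio clip, resampled to 16 kHz
speech, _ = librosa.load("sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```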
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
| 1.9126 | 1.0 | 3160 | 0.3415 | 0.4010 | 0.0853 |
| 0.2463 | 2.0 | 6320 | 0.2633 | 0.3447 | 0.0670 |
| 0.1946 | 3.0 | 9480 | 0.2369 | 0.3201 | 0.0633 |
| 0.168 | 4.0 | 12640 | 0.2246 | 0.3098 | 0.0607 |
| 0.15 | 5.0 | 15800 | 0.2179 | 0.3205 | 0.0595 |
| 0.1394 | 6.0 | 18960 | 0.2245 | 0.3060 | 0.0594 |
| 0.1283 | 7.0 | 22120 | 0.2173 | 0.3029 | 0.0600 |
| 0.1219 | 8.0 | 25280 | 0.2203 | 0.3183 | 0.0583 |
| 0.1155 | 9.0 | 28440 | 0.2148 | 0.2923 | 0.0573 |
| 0.1117 | 10.0 | 31600 | 0.2334 | 0.3037 | 0.0586 |
| 0.1031 | 11.0 | 34760 | 0.2162 | 0.2876 | 0.0578 |
| 0.0908 | 12.0 | 37920 | 0.2210 | 0.2883 | 0.0560 |
| 0.0804 | 13.0 | 41080 | 0.2271 | 0.3001 | 0.0581 |
| 0.0706 | 14.0 | 44240 | 0.2403 | 0.2753 | 0.0540 |
| 0.0602 | 15.0 | 47400 | 0.2528 | 0.2955 | 0.0578 |
| 0.0512 | 16.0 | 50560 | 0.2695 | 0.2883 | 0.0555 |
| 0.0432 | 17.0 | 53720 | 0.2597 | 0.2903 | 0.0554 |
| 0.0367 | 18.0 | 56880 | 0.2764 | 0.2850 | 0.0556 |
| 0.0317 | 19.0 | 60040 | 0.2954 | 0.2908 | 0.0570 |
| 0.0267 | 20.0 | 63200 | 0.3053 | 0.2878 | 0.0556 |
| 0.0236 | 21.0 | 66360 | 0.3087 | 0.2868 | 0.0565 |
| 0.0208 | 22.0 | 69520 | 0.2907 | 0.2970 | 0.0584 |
| 0.0175 | 23.0 | 72680 | 0.3274 | 0.2838 | 0.0550 |
| 0.0169 | 24.0 | 75840 | 0.3482 | 0.2832 | 0.0557 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.1.0+cu118
- Datasets 3.0.2
- Tokenizers 0.20.1
|
AppyFizz/caluepicr | AppyFizz | 2024-10-27T11:02:14Z | 6 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:emilianJR/epiCRealism",
"base_model:adapter:emilianJR/epiCRealism",
"license:openrail++",
"region:us"
] | text-to-image | 2024-10-27T11:02:12Z |
---
tags:
- autotrain
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: emilianJR/epiCRealism
instance_prompt: photo of caluepicr
license: openrail++
---
# AutoTrain LoRA DreamBooth - AppyFizz/caluepicr
These are LoRA adaptation weights for emilianJR/epiCRealism. The weights were trained on the instance prompt "photo of caluepicr" using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: True.
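A minimal Diffusers sketch (untested; it assumes a recent `diffusers` release and uses the instance prompt above as the trigger phrase):

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach the DreamBooth LoRA weights from this repo
pipe = DiffusionPipeline.from_pretrained(
    "emilianJR/epiCRealism", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("AppyFizz/caluepicr")

image = pipe("photo of caluepicr in a sunlit park", num_inference_steps=30).images[0]
image.save("caluepicr.png")
```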
|
somya-kr/code-llama-7b-lsi-v1.2 | somya-kr | 2024-10-27T10:46:13Z | 6 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
] | null | 2024-10-24T09:55:24Z | ---
base_model: codellama/CodeLlama-7b-hf
library_name: peft
license: llama2
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: code-llama-7b-lsi-v1.2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-lsi-v1.2
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
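Since this repository ships PEFT (LoRA) adapter weights rather than a merged checkpoint, a typical loading pattern looks like the sketch below (the prompt is only an illustration; for a 7B base you would normally add quantization or `device_map="auto"`):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id)

# Attach the LoRA adapter from this repository on top of the base model
model = PeftModel.from_pretrained(base, "somya-kr/code-llama-7b-lsi-v1.2")

prompt = "# Write a Python function that reverses a string\n"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```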
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1
### Training results
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.2 |
DeZoomer/Beyonce-FluxLora | DeZoomer | 2024-10-27T10:20:00Z | 65 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T10:16:29Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/091621_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/091616_-1_0_image_4_share_00003.webp
- text: '-'
output:
url: images/091621_-1_0_image_4_share_00004.webp
- text: '-'
output:
url: images/091622_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/091616_-1_0_image_4_share_00005.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Beyoncé | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/Beyonce-FluxLora/tree/main) them in the Files & versions tab.
|
DeZoomer/ArianaGrande-FluxLora | DeZoomer | 2024-10-27T10:12:05Z | 765 | 3 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T10:07:52Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/231609_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/231619_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/231349_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/231618_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/231240_-1_0_image_4_share_00001.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Ariana Grande | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRA, feel free to message me! Commissions are open—check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/ArianaGrande-FluxLora/tree/main) them in the Files & versions tab.
|
arsalanaa/Llama-3.2-1B-Instruct-hikaye | arsalanaa | 2024-10-27T10:08:48Z | 159 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T10:07:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
d4niel92/llama-3.2-1B-orpo | d4niel92 | 2024-10-27T09:57:07Z | 175 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"dataset:mlabonne/orpo-dpo-mix-40k",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T09:50:17Z | ---
library_name: transformers
datasets:
- mlabonne/orpo-dpo-mix-40k
base_model:
- meta-llama/Llama-3.2-1B
---
# Model Card
## Model Description
This is a Large Language Model (LLM) fine-tuned from meta-llama/Llama-3.2-1B on a subset of the "mlabonne/orpo-dpo-mix-40k" dataset.
## Evaluation Results
### Hellaswag
| Metric | Value |
| --- | --- |
| Accuracy | 0.4517 |
## How to Use
To use this model, download the checkpoint and load it with your preferred deep learning framework; a minimal example is sketched below.
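An untested sketch with 🤗 Transformers, assuming standard causal-LM usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "d4niel92/llama-3.2-1B-orpo"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain in one sentence what ORPO fine-tuning does."
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
|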
DeZoomer/AdrianaLima-FluxLora | DeZoomer | 2024-10-27T09:54:58Z | 19 | 1 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"stable-diffusion",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T09:51:10Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- stable-diffusion
widget:
- text: '-'
output:
url: images/091554_-1_0_image_4_share_00007.webp
- text: '-'
output:
url: images/091554_-1_0_image_4_share_00008.webp
- text: '-'
output:
url: images/091607_-1_0_image_4_share_00001.webp
- text: '-'
output:
url: images/091608_-1_0_image_4_share_00002.webp
- text: '-'
output:
url: images/091609_-1_0_image_4_share_00004.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
inference:
parameters:
width: 768
height: 1024
---
# Adriana Lima | Flux
<Gallery />
## Model description
Trained locally with 20 publicly accessible images using AI-Toolkit (Flux.1 Dev).
Use with LoRA strength between **0.8-1.2** and FluxGuidance between **3-4**. No keywords needed.
Example prompt (ComfyUI): *Portrait photo of a woman in a garden.*
**Want a custom/private LoRA?** Good news—commissions are open! Request yours here: [https://ko-fi.com/de_zoomer/commissions](https://ko-fi.com/de_zoomer/commissions).
## Background
I've been deeply exploring how to create LoRAs with 100% accuracy to the original character. My focus is on quality, which is why my files tend to be heavier than others.
After creating over 100+ LoRAs for testing, using both Kohya and AI-Toolkit since day one, I've consistently stayed up to date with the latest releases, exchanging knowledge in their communities.
My expertise is mainly with characters, so I’m not as familiar with LoRAs for style or anime, although the process might not differ too much.
If you want your own custom LoRa, feel free to message me! Commissions are open—check out my Ko-fi link above.
Enjoy using my LoRAs and have fun!
## Download model
Weights for this model are available in Safetensors format.
[Download](/DeZoomer/AdrianaLima-FluxLora/tree/main) them in the Files & versions tab.
|
Sri3010/wav2vec2-large-xls-r-300m-TAMIL-colab | Sri3010 | 2024-10-27T09:54:02Z | 108 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-26T14:44:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Nessrine9/Finetune2-MiniLM-L12-v2 | Nessrine9 | 2024-10-27T09:45:36Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:100000",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L12-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-L12-v2",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-10-27T09:45:26Z | ---
base_model: sentence-transformers/all-MiniLM-L12-v2
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:100000
- loss:CosineSimilarityLoss
widget:
- source_sentence: A woman wearing a yellow shirt is holding a plate which contains
a piece of cake.
sentences:
- The woman in the yellow shirt might have cut the cake and placed it on the plate.
- Male bicyclists compete in the Tour de France.
- The man is walking
- source_sentence: People gather and talk in the street.
sentences:
- Club goers outside discussing the police raid.
- a woman is leaning on a skateboard
- There are many people singing.
- source_sentence: A child sliding face first down a metal tube
sentences:
- A man with a red shirt is bowling with his 2 sons.
- The child is sliding face first
- There is a girl in a dress.
- source_sentence: A man walking a gray poodle is walking past a billboard with a
cow on it.
sentences:
- A house build with wooden stairs and the family is enjoying sitting on them
- A woman is playing checkers.
- The man is walking his grey cat.
- source_sentence: A man fishing in a pointy blue boat on a river lined with palm
trees.
sentences:
- Labrador Retrievers are energetic dogs that will play catch for hours.
- A man rubs his bald head.
- The man is with friends.
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
results:
- task:
type: semantic-similarity
name: Semantic Similarity
dataset:
name: snli dev
type: snli-dev
metrics:
- type: pearson_cosine
value: 0.5002872232214081
name: Pearson Cosine
- type: spearman_cosine
value: 0.49187589438593304
name: Spearman Cosine
- type: pearson_manhattan
value: 0.47522303163337404
name: Pearson Manhattan
- type: spearman_manhattan
value: 0.49169237941097593
name: Spearman Manhattan
- type: pearson_euclidean
value: 0.47599896939605724
name: Pearson Euclidean
- type: spearman_euclidean
value: 0.49187587264847454
name: Spearman Euclidean
- type: pearson_dot
value: 0.5002872256206143
name: Pearson Dot
- type: spearman_dot
value: 0.49187604689169206
name: Spearman Dot
- type: pearson_max
value: 0.5002872256206143
name: Pearson Max
- type: spearman_max
value: 0.49187604689169206
name: Spearman Max
---
# SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) <!-- at revision 30ce63ae64e71b9199b3d2eae9de99f64a26eedc -->
- **Maximum Sequence Length:** 128 tokens
- **Output Dimensionality:** 384 tokens
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Nessrine9/Finetune2-MiniLM-L12-v2")
# Run inference
sentences = [
'A man fishing in a pointy blue boat on a river lined with palm trees.',
'The man is with friends.',
'A man rubs his bald head.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Semantic Similarity
* Dataset: `snli-dev`
* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| pearson_cosine | 0.5003 |
| spearman_cosine | 0.4919 |
| pearson_manhattan | 0.4752 |
| spearman_manhattan | 0.4917 |
| pearson_euclidean | 0.476 |
| spearman_euclidean | 0.4919 |
| pearson_dot | 0.5003 |
| spearman_dot | 0.4919 |
| pearson_max | 0.5003 |
| **spearman_max** | **0.4919** |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 100,000 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 16.38 tokens</li><li>max: 61 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 10.56 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------|:------------------------------------------|:-----------------|
| <code>Three men in an art gallery posing for the camera.</code> | <code>Paintings are nearby.</code> | <code>0.5</code> |
| <code>A shirtless man wearing a vest walks on a stage with his arms up.</code> | <code>The man is about to perform.</code> | <code>0.5</code> |
| <code>The man is walking outside near a rocky river.</code> | <code>The man is walking</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | Training Loss | snli-dev_spearman_max |
|:------:|:-----:|:-------------:|:---------------------:|
| 0.08 | 500 | 0.1842 | 0.3333 |
| 0.16 | 1000 | 0.1489 | 0.3449 |
| 0.24 | 1500 | 0.1427 | 0.3633 |
| 0.32 | 2000 | 0.1391 | 0.3854 |
| 0.4 | 2500 | 0.1401 | 0.4015 |
| 0.48 | 3000 | 0.139 | 0.3982 |
| 0.56 | 3500 | 0.1352 | 0.4327 |
| 0.64 | 4000 | 0.1319 | 0.4262 |
| 0.72 | 4500 | 0.1336 | 0.4034 |
| 0.8 | 5000 | 0.1321 | 0.4021 |
| 0.88 | 5500 | 0.1309 | 0.4294 |
| 0.96 | 6000 | 0.1271 | 0.4198 |
| 1.0 | 6250 | - | 0.4317 |
| 1.04 | 6500 | 0.132 | 0.4445 |
| 1.12 | 7000 | 0.1296 | 0.4509 |
| 1.2 | 7500 | 0.1236 | 0.4559 |
| 1.28 | 8000 | 0.1257 | 0.4542 |
| 1.3600 | 8500 | 0.1236 | 0.4507 |
| 1.44 | 9000 | 0.1277 | 0.4540 |
| 1.52 | 9500 | 0.1249 | 0.4664 |
| 1.6 | 10000 | 0.1208 | 0.4418 |
| 1.6800 | 10500 | 0.1228 | 0.4457 |
| 1.76 | 11000 | 0.1212 | 0.4222 |
| 1.8400 | 11500 | 0.1203 | 0.4507 |
| 1.92 | 12000 | 0.119 | 0.4572 |
| 2.0 | 12500 | 0.1196 | 0.4667 |
| 2.08 | 13000 | 0.1194 | 0.4733 |
| 2.16 | 13500 | 0.1172 | 0.4786 |
| 2.24 | 14000 | 0.1172 | 0.4765 |
| 2.32 | 14500 | 0.1145 | 0.4717 |
| 2.4 | 15000 | 0.1167 | 0.4803 |
| 2.48 | 15500 | 0.1177 | 0.4678 |
| 2.56 | 16000 | 0.1162 | 0.4805 |
| 2.64 | 16500 | 0.1137 | 0.4780 |
| 2.7200 | 17000 | 0.1153 | 0.4788 |
| 2.8 | 17500 | 0.115 | 0.4784 |
| 2.88 | 18000 | 0.1128 | 0.4864 |
| 2.96 | 18500 | 0.11 | 0.4812 |
| 3.0 | 18750 | - | 0.4823 |
| 3.04 | 19000 | 0.1136 | 0.4900 |
| 3.12 | 19500 | 0.1135 | 0.4897 |
| 3.2 | 20000 | 0.1094 | 0.4856 |
| 3.2800 | 20500 | 0.1108 | 0.4889 |
| 3.36 | 21000 | 0.1083 | 0.4909 |
| 3.44 | 21500 | 0.1133 | 0.4892 |
| 3.52 | 22000 | 0.1106 | 0.4910 |
| 3.6 | 22500 | 0.1079 | 0.4888 |
| 3.68 | 23000 | 0.1091 | 0.4890 |
| 3.76 | 23500 | 0.1079 | 0.4822 |
| 3.84 | 24000 | 0.1087 | 0.4887 |
| 3.92 | 24500 | 0.1066 | 0.4926 |
| 4.0 | 25000 | 0.1069 | 0.4919 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.1
- Transformers: 4.44.2
- PyTorch: 2.5.0+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.2
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF | mradermacher | 2024-10-27T09:32:09Z | 12 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted",
"base_model:quantized:toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T13:29:07Z | ---
base_model: toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/toxibunny/Mistral-Small-22B-ArliAI-RPMax-Diluted
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
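For a quick local sanity check from Python, a minimal sketch with the `llama-cpp-python` bindings is shown below; the quant file name, context size, and sampling settings are illustrative assumptions, not recommendations from the quantizer.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The GGUF file name below is an assumption -- substitute whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-22B-ArliAI-RPMax-Diluted.Q4_K_M.gguf",
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```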
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q2_K.gguf) | Q2_K | 8.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q3_K_S.gguf) | Q3_K_S | 9.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q3_K_M.gguf) | Q3_K_M | 10.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q3_K_L.gguf) | Q3_K_L | 11.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.IQ4_XS.gguf) | IQ4_XS | 12.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q4_K_S.gguf) | Q4_K_S | 12.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q4_K_M.gguf) | Q4_K_M | 13.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q5_K_S.gguf) | Q5_K_S | 15.4 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q5_K_M.gguf) | Q5_K_M | 15.8 | |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q6_K.gguf) | Q6_K | 18.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mistral-Small-22B-ArliAI-RPMax-Diluted-GGUF/resolve/main/Mistral-Small-22B-ArliAI-RPMax-Diluted.Q8_0.gguf) | Q8_0 | 23.7 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
ravi-ednova/merged-model | ravi-ednova | 2024-10-27T09:14:36Z | 104 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T09:12:11Z | ---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** ravi-ednova
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
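A minimal, hedged inference sketch with 🤗 Transformers is shown below; the chat message and generation settings are placeholders rather than part of the original training setup.

```python
# Minimal sketch: load the merged model and generate a reply.
# Prompt content and generation settings are placeholders.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="ravi-ednova/merged-model",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize what instruction tuning does."}]
outputs = pipe(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])  # last turn is the generated assistant message
```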
|
mrmocciai/genshin-impact | mrmocciai | 2024-10-27T09:07:52Z | 0 | 29 | null | [
"music",
"audio-to-audio",
"ja",
"license:mit",
"region:us"
] | audio-to-audio | 2023-06-28T18:23:34Z | ---
language:
- ja
license: mit
metrics:
- accuracy
pipeline_tag: audio-to-audio
tags:
- music
---
# <center> RVC Models Genshin Impact V2 Japanese<br />
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
.rounded {
      border-radius: 15px; /* You can change this value as needed */
}
</style>
</head>
<body>
  <img src="https://huggingface.co/mocci24/RVCV2-GI/resolve/main/model-cover.jpg" alt="Model cover image" class="rounded">
</body>
<div align="center">
<br />OPEN ON
[](https://huggingface.co/spaces/mocci24/rvc-genshin-v2)
</div>
---
## <center> INFO <br />
Model Creator : <br />
~ <b>[Mocci24](https://youtube.com/@mrmocciai)</b> <br />
~ <b>[HirumiM](https://huggingface.co/HirumiM)</b> <br />
----
## <b>Looking for example song covers from these models?</b><br />
- Song 1 [A Thousand Years by Kamisato Ayaka (AI)](https://www.bandlab.com/post/082a21f6-000a-ee11-907c-000d3a41ef61)<br />
- Song 3 [Like I'm Gonna Lose You by Ayaka ft. Kazuha (AI)](https://www.bandlab.com/post/392d1794-6529-ee11-a9bb-000d3a41e8b8)<br />
<p style="color: red;">There are no sample songs on YouTube because the channel was taken down. There is an alternative on BandLab; check below:</p><br />
<div style="display: flex; align-items: center;">
<a href="https://www.bandlab.com/moccito">
<img src="bandlab.png" alt="Icon" style="width: 50px; height: 50px;">
</a>
<p style="color: orange; font-weight: bold; margin-left: 10px;">BANDLAB</p>
</div>
----
#### <center> RVC V2 Models information <br />
Trained on the original RVC Training V2.<br />
Using the pitch extraction methods "harvest" and <i><b>"rmvpe"</b></i>.<br />
Minimum 300 epochs, 40k sample rate, and a 5-20 minute dataset including battle voice lines.<br />
----
# <center> Current Models (51 total), sorted by name <br />
~ Aether 1000 epoch (harvest) by HirumiM<br />
~ Amber 400 epoch (harvest) by HirumiM<br />
<b>~ ARLECCHINO 300 EPOCH (rmvpe) by MrMocci</b><br />
~ Bennett 400 epoch (harvest) by HirumiM<br />
<b>~ BEIDOU 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Candace 400 epoch (harvest) by Mocci24<br />
~ Childe 400 epoch (harvest) by HirumiM<br />
<b>~ CHIORI 300 epoch (rmvpe) by Mocci24<br />
~ CLORINDE 225 EPOCH (rmvpe) by Mocci24</b><br />
~ Collei 400 epoch (harvest) by HirumiM<br />
<b>~ DEHYA 400 EPOCH (rmvpe) by Mocci24<br />
~ EULA 400 EPOCH (rmvpe) by Mocci24<br />
~ FARUZAN 400 EPOCH (rmvpe) by HirumiM<br />
~ FURINA 375 EPOCH (rmvpe) by Mocci24</b><br />
~ Ganyu 400 epoch (harvest) by Mocci24<br />
~ Hutao 400 epoch (harvest) by Mocci24<br />
<b>~ JEAN 400 EPOCH (rmvpe) by Mocci24<br/>
~ KAEDEHARA KAZUHA 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Kamisato Ayaka 1000 epoch (harvest) by Mocci24<br />
<b>~ KAMISATO AYAKA 400 EPOCH (rmvpe) by Hirumim & Mocci24<br /></b>
~ Kaveh 400 epoch (harvest) by HirumiM<br /> <b>(Note: set the speaker/singer id to 4; this only applies to the Kaveh model, for other models it stays the same.)</b><br />
~ Keqing 400 epoch (harvest) by HirumiM<br />
<b>~ KIRARA 400 EPOCH (rmvpe) by Mocci24<br />
~ KUJO SARA 400 EPOCH (rmvpe) by Mocci24</b><br />
~ Layla 400 epoch (harvest) by HirumiM<br />
<b>~ LYNETTE 400 EPOCH (rmvpe) by Mocci24<br />
~ LYNEY 400 EPOCH (rmvpe) by Mocci24<br /></b>
~ Lisa 400 epoch (harvest) by Mocci24<br />
~ Lumine 1000 epoch (harvest) by HirumiM<br />
<b>~ MAVUIKA 300 EPOCH (rmvpe) by Mocci24</b><br />
~ Mualani --- epoch (----) by Hanvy12345<br />
~ Nahida 400 epoch (harvest) by HirumiM<br />
<b>~ NAVIA 400 EPOCH (rmvpe) by Mocci24<br />
~ NEUVILLETTE 400 EPOCH (rmvpe) by Mocci24<br />
~ NILOU 400 EPOCH (rmvpe) by Mocci24<br />
~ PAIMON 400 EPOCH (rmvpe) by HirumiM<br />
~ RAIDEN EI 400 EPOCH (rmvpe) by HirumiM<br />
~ RAIDEN PUPPET 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Sangonomiya Kokomi 400 epoch (harvest) by Mocci24<br />
<b>~ SHENHE 400 EPOCH (rmvpe) by Mocci24<br />
~ VENTI 400 EPOCH (rmvpe) by Mocci24</b><br />
~ Wanderer 400 epoch (harvest) by HirumiM<br />
<b>~ WRIOTHESLEY 350 EPOCH (rmvpe) by Mocci24<br />
~ Xianyun --- epoch (----) by Hanvy12345<br />
~ XIANGLING 400 EPOCH (rmvpe) by Mocci24<br /></b>
~ Xiao 400 epoch (harvest) by HirumiM<br />
~ Xinyan 400 epoch (harvest) by HirumiM<br />
~ Yae Miko 400 epoch (harvest) by Mocci24 <br />
~ Yanfei 400 epoch (harvest) by HirumiM<br />
<b>~ YELAN 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Yoimiya 500 epoch (harvest) by Mocci24<br />
~ Zhongli 400 epoch (harvest) by Mocci24<br />
<br />
----
<br />
<br />
<b>Changes:<br /></b>
<br />- Added models Dehya (rmvpe)
<br />- Added models Kirara (rmvpe)
<br />- Added models Nilou (rmvpe)
<br />- Added models Paimon (rmvpe)
<br />- Added models Lynette (rmvpe)
<br />- Added models Venti (rmvpe)
<br />- Added models Navia (rmvpe)
<br />- Added models Neuvillette (rmvpe)
<br />- Added models Faruzan (rmvpe)
<br />- Added models Clorinde (rmvpe) <b>"This model is still early; its dataset is still limited, so the training results may not be good enough"</b><br />
<br />- Added models Kujo Sara (rmvpe)
<br />- Added models Lyney (rmvpe)
<br />- Added models Shenhe (rmvpe)
<br />- Added models Wriothesley (rmvpe)
<br />- Updated models Furina (rmvpe)
<br />- Added models Chiori (rmvpe) <span style="color: yellow;">NEW!</span>
<br />- Added models Arlecchino (rmvpe) <span style="color: yellow;">NEW!</span>
<br />- Added models Xianyun (unknown) <span style="color: yellow;">NEW!</span>
<br />- Added models Mualani (unknown) <span style="color: yellow;">NEW!</span>
<br />- Added models Mavuika (rmvpe) <span style="color: yellow;">NEW!</span>
----
#####
----
Copy this into your Colab notebook (run it before you run/install requirement.txt):
```bash
!apt install git-lfs
!git lfs install
!git clone https://huggingface.co/mrmocciai/genshin-impact
```
and this
```bash
!git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git
!mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs
!cp -r /content/genshin-impact/req/* /content/Retrieval-based-Voice-Conversion-WebUI
!mv /content/genshin-impact/model/* /content/Retrieval-based-Voice-Conversion-WebUI/logs
!mv /content/Retrieval-based-Voice-Conversion-WebUI/logs/weights/* /content/Retrieval-based-Voice-Conversion-WebUI/weights
%cd /content/Retrieval-based-Voice-Conversion-WebUI
!mkdir -p pretrained uvr5_weights
```
<br />
----
|
mrmocciai/Models_Zip | mrmocciai | 2024-10-27T08:52:55Z | 0 | 2 | null | [
"music",
"audio-to-audio",
"ja",
"license:mit",
"region:us"
] | audio-to-audio | 2023-09-19T08:37:17Z | ---
language:
- ja
license: mit
metrics:
- accuracy
pipeline_tag: audio-to-audio
tags:
- music
---
# <center> RVC Models Genshin Impact V2 Japanese<br />
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
.rounded {
      border-radius: 15px; /* You can change this value as needed */
}
</style>
</head>
<body>
  <img src="https://huggingface.co/mocci24/RVCV2-GI/resolve/main/model-cover.jpg" alt="Model cover image" class="rounded">
</body>
<div align="center">
<br />OPEN ON
[](https://huggingface.co/spaces/mocci24/rvc-genshin-v2)
</div>
---
## <center> INFO <br />
Model Creator : <br />
~ <b>[Mocci24](https://youtube.com/@mrmocciai)</b> <br />
~ <b>[HirumiM](https://huggingface.co/HirumiM)</b> <br />
----
## <b>Looking for example song covers from these models?</b><br />
- Song 1 [A Thousand Years by Kamisato Ayaka (AI)](https://www.bandlab.com/post/082a21f6-000a-ee11-907c-000d3a41ef61)<br />
- Song 3 [Like I'm Gonna Lose You by Ayaka ft. Kazuha (AI)](https://www.bandlab.com/post/392d1794-6529-ee11-a9bb-000d3a41e8b8)<br />
<p style="color: red;">There are no sample songs on YouTube because the channel was taken down. There is an alternative on BandLab; check below:</p><br />
<div style="display: flex; align-items: center;">
<a href="https://www.bandlab.com/moccito">
<img src="bandlab.png" alt="Icon" style="width: 50px; height: 50px;">
</a>
<p style="color: orange; font-weight: bold; margin-left: 10px;">BANDLAB</p>
</div>
----
#### <center> RVC V2 Models information <br />
Trained on the original RVC Training V2.<br />
Using the pitch extraction methods "harvest" and <i><b>"rmvpe"</b></i>.<br />
Minimum 300 epochs, 40k sample rate, and a 5-20 minute dataset including battle voice lines.<br />
----
# <center> Current Models (51 total), sorted by name <br />
~ Aether 1000 epoch (harvest) by HirumiM<br />
~ Amber 400 epoch (harvest) by HirumiM<br />
<b>~ ARLECCHINO 300 EPOCH (rmvpe) by MrMocci</b><br />
~ Bennett 400 epoch (harvest) by HirumiM<br />
<b>~ BEIDOU 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Candace 400 epoch (harvest) by Mocci24<br />
~ Childe 400 epoch (harvest) by HirumiM<br />
<b>~ CHIORI 300 epoch (rmvpe) by Mocci24<br />
~ CLORINDE 225 EPOCH (rmvpe) by Mocci24</b><br />
~ Collei 400 epoch (harvest) by HirumiM<br />
<b>~ DEHYA 400 EPOCH (rmvpe) by Mocci24<br />
~ EULA 400 EPOCH (rmvpe) by Mocci24<br />
~ FARUZAN 400 EPOCH (rmvpe) by HirumiM<br />
~ FURINA 375 EPOCH (rmvpe) by Mocci24</b><br />
~ Ganyu 400 epoch (harvest) by Mocci24<br />
~ Hutao 400 epoch (harvest) by Mocci24<br />
<b>~ JEAN 400 EPOCH (rmvpe) by Mocci24<br/>
~ KAEDEHARA KAZUHA 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Kamisato Ayaka 1000 epoch (harvest) by Mocci24<br />
<b>~ KAMISATO AYAKA 400 EPOCH (rmvpe) by Hirumim & Mocci24<br /></b>
~ Kaveh 400 epoch (harvest) by HirumiM<br /> <b>(Note: set the speaker/singer id to 4; this only applies to the Kaveh model, for other models it stays the same.)</b><br />
~ Keqing 400 epoch (harvest) by HirumiM<br />
<b>~ KIRARA 400 EPOCH (rmvpe) by Mocci24<br />
~ KUJO SARA 400 EPOCH (rmvpe) by Mocci24</b><br />
~ Layla 400 epoch (harvest) by HirumiM<br />
<b>~ LYNETTE 400 EPOCH (rmvpe) by Mocci24<br />
~ LYNEY 400 EPOCH (rmvpe) by Mocci24<br /></b>
~ Lisa 400 epoch (harvest) by Mocci24<br />
~ Lumine 1000 epoch (harvest) by HirumiM<br />
~ MAVUIKA 300 EPOCH (rmvpe) by Mocci24<br />
~ Mualani --- epoch (----) by Hanvy12345<br />
~ Nahida 400 epoch (harvest) by HirumiM<br />
<b>~ NAVIA 400 EPOCH (rmvpe) by Mocci24<br />
~ NEUVILLETTE 400 EPOCH (rmvpe) by Mocci24<br />
~ NILOU 400 EPOCH (rmvpe) by Mocci24<br />
~ PAIMON 400 EPOCH (rmvpe) by HirumiM<br />
~ RAIDEN EI 400 EPOCH (rmvpe) by HirumiM<br />
~ RAIDEN PUPPET 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Sangonomiya Kokomi 400 epoch (harvest) by Mocci24<br />
<b>~ SHENHE 400 EPOCH (rmvpe) by Mocci24<br />
~ VENTI 400 EPOCH (rmvpe) by Mocci24</b><br />
~ Wanderer 400 epoch (harvest) by HirumiM<br />
<b>~ WRIOTHESLEY 350 EPOCH (rmvpe) by Mocci24<br />
~ Xianyun --- epoch (----) by Hanvy12345<br />
~ XIANGLING 400 EPOCH (rmvpe) by Mocci24<br /></b>
~ Xiao 400 epoch (harvest) by HirumiM<br />
~ Xinyan 400 epoch (harvest) by HirumiM<br />
~ Yae Miko 400 epoch (harvest) by Mocci24 <br />
~ Yanfei 400 epoch (harvest) by HirumiM<br />
<b>~ YELAN 400 EPOCH (rmvpe) by HirumiM<br /></b>
~ Yoimiya 500 epoch (harvest) by Mocci24<br />
~ Zhongli 400 epoch (harvest) by Mocci24<br />
<br />
----
<br />
<br />
<b>Changes:<br /></b>
<br />- Added models Dehya (rmvpe)
<br />- Added models Kirara (rmvpe)
<br />- Added models Nilou (rmvpe)
<br />- Added models Paimon (rmvpe)
<br />- Added models Lynette (rmvpe)
<br />- Added models Venti (rmvpe)
<br />- Added models Navia (rmvpe)
<br />- Added models Neuvillette (rmvpe)
<br />- Added models Faruzan (rmvpe)
<br />- Added models Clorinde (rmvpe) <b>"This model is still early; its dataset is still limited, so the training results may not be good enough"</b><br />
<br />- Added models Kujo Sara (rmvpe)
<br />- Added models Lyney (rmvpe)
<br />- Added models Shenhe (rmvpe)
<br />- Added models Wriothesley (rmvpe)
<br />- Updated models Furina (rmvpe)
<br />- Added models Chiori (rmvpe) <span style="color: yellow;">NEW!</span>
<br />- Added models Arlecchino (rmvpe) <span style="color: yellow;">NEW!</span>
<br />- Added models Xianyun (unknown) <span style="color: yellow;">NEW!</span>
<br />- Added models Mualani (unknown) <span style="color: yellow;">NEW!</span>
<br />- Added models Mavuika (rmvpe) <span style="color: yellow;">NEW!</span>
----
#####
----
Copy this into your Colab notebook (run it before you run/install requirement.txt):
```bash
!apt install git-lfs
!git lfs install
!git clone https://huggingface.co/mrmocciai/genshin-impact
```
and this
```bash
!git clone https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI.git
!mkdir -p /content/Retrieval-based-Voice-Conversion-WebUI/logs
!cp -r /content/genshin-impact/req/* /content/Retrieval-based-Voice-Conversion-WebUI
!mv /content/genshin-impact/model/* /content/Retrieval-based-Voice-Conversion-WebUI/logs
!mv /content/Retrieval-based-Voice-Conversion-WebUI/logs/weights/* /content/Retrieval-based-Voice-Conversion-WebUI/weights
%cd /content/Retrieval-based-Voice-Conversion-WebUI
!mkdir -p pretrained uvr5_weights
```
<br />
----
|
kavish218/Llama-3.2-1B-Instruct | kavish218 | 2024-10-27T08:49:54Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T08:47:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
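Until the usage section is filled in, the sketch below is a minimal, hedged starting point; it assumes the tokenizer ships a chat template, and the prompt and generation settings are placeholders.

```python
# Minimal sketch; assumes a chat template is bundled with the tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kavish218/Llama-3.2-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "What can this model help me with?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```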
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
qnguyen3/nanoLLaVA | qnguyen3 | 2024-10-27T08:48:18Z | 14,381 | 153 | transformers | [
"transformers",
"safetensors",
"llava-qwen2",
"text-generation",
"llava",
"multimodal",
"qwen",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-04-04T03:32:53Z | ---
language:
- en
tags:
- llava
- multimodal
- qwen
license: apache-2.0
---
# nanoLLaVA - Sub 1B Vision-Language Model
**IMPORTANT**: **nanoLLaVA-1.5** is out with a much better performance. Please find it [here](https://huggingface.co/qnguyen3/nanoLLaVA-1.5).
<p align="center">
<img src="https://i.postimg.cc/d15k3YNG/nanollava.webp" alt="Logo" width="350">
</p>
## Description
nanoLLaVA is a "small but mighty" 1B vision-language model designed to run efficiently on edge devices.
- **Base LLM**: [Quyen-SE-v0.1](https://huggingface.co/vilm/Quyen-SE-v0.1) (Qwen1.5-0.5B)
- **Vision Encoder**: [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
| Model | **VQA v2** | **TextVQA** | **ScienceQA** | **POPE** | **MMMU (Test)** | **MMMU (Eval)** | **GQA** | **MM-VET** |
|---------|--------|---------|-----------|------|-------------|-------------|------|--------|
| Score | 70.84 | 46.71 | 58.97 | 84.1 | 28.6 | 30.4 | 54.79| 23.9 |
## Training Data
Training data will be released later, as I am still writing a paper on this. Expect the final model to be much more powerful than the current one.
## Finetuning Code
Coming Soon!!!
## Usage
You can use it with `transformers` via the following script:
```bash
pip install -U transformers accelerate flash_attn
```
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
from PIL import Image
import warnings
# disable some warnings
transformers.logging.set_verbosity_error()
transformers.logging.disable_progress_bar()
warnings.filterwarnings('ignore')
# set device
torch.set_default_device('cuda') # or 'cpu'
# create model
model = AutoModelForCausalLM.from_pretrained(
'qnguyen3/nanoLLaVA',
torch_dtype=torch.float16,
device_map='auto',
trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(
'qnguyen3/nanoLLaVA',
trust_remote_code=True)
# text prompt
prompt = 'Describe this image in detail'
messages = [
{"role": "user", "content": f'<image>\n{prompt}'}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
print(text)
text_chunks = [tokenizer(chunk).input_ids for chunk in text.split('<image>')]
input_ids = torch.tensor(text_chunks[0] + [-200] + text_chunks[1], dtype=torch.long).unsqueeze(0)
# image, sample images can be found in images folder
image = Image.open('/path/to/image.png')
image_tensor = model.process_images([image], model.config).to(dtype=model.dtype)
# generate
output_ids = model.generate(
input_ids,
images=image_tensor,
max_new_tokens=2048,
use_cache=True)[0]
print(tokenizer.decode(output_ids[input_ids.shape[1]:], skip_special_tokens=True).strip())
```
## Prompt Format
The model follows the ChatML standard, but without `\n` at the end of `<|im_end|>`:
```
<|im_start|>system
Answer the question<|im_end|><|im_start|>user
<image>
What is the picture about?<|im_end|><|im_start|>assistant
```
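If you prefer not to write the template by hand, the same string can be produced from a `messages` list with the tokenizer loaded in the usage example above; this is only an illustrative check, and the system turn is optional.

```python
# Illustrative check: build the ChatML-style prompt from a messages list
# (tokenizer loaded via AutoTokenizer as in the usage example above).
messages = [
    {"role": "system", "content": "Answer the question"},
    {"role": "user", "content": "<image>\nWhat is the picture about?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # should match the template shown above
```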
---
| Image | Example |
|--------------------------------------|---------------------------------------------------------------------------------------------|
|  | **What is the text saying?** <br> "Small but mighty". <br>**How does the text correlate to the context of the image?** <br> The text seems to be a playful or humorous representation of a small but mighty figure, possibly a mouse or a mouse toy, holding a weightlifting bar. |
---
The model is trained using a modified version of [Bunny](https://github.com/BAAI-DCAI/Bunny/tree/main/bunny) |
jlbaker361/ddpogan_512_wikiart_100_0_50 | jlbaker361 | 2024-10-27T08:47:15Z | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-10-27T08:45:44Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers pipeline that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
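In the absence of an official snippet, a minimal, hedged sketch for loading this 🧨 diffusers pipeline is shown below; the prompt is a placeholder, and dtype/device should be adjusted to your hardware.

```python
# Minimal sketch: load the pipeline from the Hub and sample one image.
# The prompt is a placeholder; pick dtype/device to match your hardware.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "jlbaker361/ddpogan_512_wikiart_100_0_50", torch_dtype=torch.float16
).to("cuda")

image = pipe("a painting in the style of the training data", num_inference_steps=30).images[0]
image.save("sample.png")
```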
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alyzbane/convnext-tiny-224-finetuned-barkley | alyzbane | 2024-10-27T08:35:23Z | 384 | 0 | transformers | [
"transformers",
"safetensors",
"convnext",
"image-classification",
"generated_from_trainer",
"base_model:facebook/convnext-tiny-224",
"base_model:finetune:facebook/convnext-tiny-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-10-17T20:47:38Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/convnext-tiny-224
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: convnext-tiny-224-finetuned-barkley
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convnext-tiny-224-finetuned-barkley
This model is a fine-tuned version of [facebook/convnext-tiny-224](https://huggingface.co/facebook/convnext-tiny-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0128
- Precision: 1.0
- Recall: 1.0
- F1: 1.0
- Accuracy: 1.0
- Top1 Accuracy: 1.0
- Error Rate: 0.0
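For quick inference, a minimal, hedged sketch with the image-classification pipeline is shown below; the image path is a placeholder.

```python
# Minimal sketch: classify a single image with the fine-tuned checkpoint.
# The image path is a placeholder.
from transformers import pipeline

classifier = pipeline("image-classification", model="alyzbane/convnext-tiny-224-finetuned-barkley")
print(classifier("bark_sample.jpg", top_k=3))  # top-3 labels with scores
```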
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | Top1 Accuracy | Error Rate |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|:-------------:|:----------:|
| 1.6288 | 1.0 | 38 | 1.6005 | 0.2133 | 0.2697 | 0.2043 | 0.2371 | 0.2697 | 0.7629 |
| 1.6059 | 2.0 | 76 | 1.5802 | 0.2384 | 0.2763 | 0.2243 | 0.2473 | 0.2763 | 0.7527 |
| 1.5808 | 3.0 | 114 | 1.5570 | 0.2778 | 0.3026 | 0.2595 | 0.2744 | 0.3026 | 0.7256 |
| 1.5555 | 4.0 | 152 | 1.5291 | 0.3831 | 0.375 | 0.3491 | 0.3511 | 0.375 | 0.6489 |
| 1.5232 | 5.0 | 190 | 1.4933 | 0.4252 | 0.4408 | 0.4154 | 0.4147 | 0.4408 | 0.5853 |
| 1.4784 | 6.0 | 228 | 1.4484 | 0.5076 | 0.5197 | 0.4926 | 0.4972 | 0.5197 | 0.5028 |
| 1.4242 | 7.0 | 266 | 1.3902 | 0.6857 | 0.6382 | 0.6307 | 0.6249 | 0.6382 | 0.3751 |
| 1.3586 | 8.0 | 304 | 1.3186 | 0.7728 | 0.7171 | 0.7166 | 0.7134 | 0.7171 | 0.2866 |
| 1.276 | 9.0 | 342 | 1.2236 | 0.8547 | 0.8026 | 0.8109 | 0.8060 | 0.8026 | 0.1940 |
| 1.1778 | 10.0 | 380 | 1.1122 | 0.8899 | 0.8553 | 0.8609 | 0.8601 | 0.8553 | 0.1399 |
| 1.0543 | 11.0 | 418 | 0.9839 | 0.9064 | 0.8947 | 0.8958 | 0.9005 | 0.8947 | 0.0995 |
| 0.921 | 12.0 | 456 | 0.8418 | 0.9541 | 0.9539 | 0.9537 | 0.9575 | 0.9539 | 0.0425 |
| 0.773 | 13.0 | 494 | 0.6935 | 0.9624 | 0.9605 | 0.9605 | 0.9652 | 0.9605 | 0.0348 |
| 0.6204 | 14.0 | 532 | 0.5515 | 0.9688 | 0.9671 | 0.9672 | 0.9708 | 0.9671 | 0.0292 |
| 0.4835 | 15.0 | 570 | 0.4146 | 0.9704 | 0.9671 | 0.9676 | 0.9697 | 0.9671 | 0.0303 |
| 0.3641 | 16.0 | 608 | 0.3043 | 0.9805 | 0.9803 | 0.9802 | 0.9830 | 0.9803 | 0.0170 |
| 0.2706 | 17.0 | 646 | 0.2247 | 0.9805 | 0.9803 | 0.9802 | 0.9830 | 0.9803 | 0.0170 |
| 0.1998 | 18.0 | 684 | 0.1705 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.1446 | 19.0 | 722 | 0.1271 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.1106 | 20.0 | 760 | 0.1047 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0872 | 21.0 | 798 | 0.0780 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0614 | 22.0 | 836 | 0.0739 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0491 | 23.0 | 874 | 0.0517 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0365 | 24.0 | 912 | 0.0401 | 0.9871 | 0.9868 | 0.9868 | 0.9878 | 0.9868 | 0.0122 |
| 0.0255 | 25.0 | 950 | 0.0336 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0212 | 26.0 | 988 | 0.0377 | 0.9873 | 0.9868 | 0.9868 | 0.9889 | 0.9868 | 0.0111 |
| 0.0175 | 27.0 | 1026 | 0.0195 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
| 0.0125 | 28.0 | 1064 | 0.0214 | 0.9936 | 0.9934 | 0.9934 | 0.9933 | 0.9934 | 0.0067 |
| 0.0155 | 29.0 | 1102 | 0.0128 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 0.0 |
| 0.0104 | 30.0 | 1140 | 0.0159 | 0.9937 | 0.9934 | 0.9934 | 0.9944 | 0.9934 | 0.0056 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
bytebeats007/my-gemma-2-finetuned-model | bytebeats007 | 2024-10-27T08:08:52Z | 174 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T08:05:45Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
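Given the 4-bit/bitsandbytes tags on this repository, the sketch below is a minimal, hedged loading example; the quantization settings and prompt are assumptions, not taken from the training setup.

```python
# Minimal sketch: load the checkpoint in 4-bit (suggested by the repo tags).
# Quantization config and prompt are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "bytebeats007/my-gemma-2-finetuned-model"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb, device_map="auto")

inputs = tokenizer("Question: What did you learn during fine-tuning?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```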
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GawdSB/first-try-poke | GawdSB | 2024-10-27T07:51:35Z | 217 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:GawdSB/moveset_test",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T07:37:13Z | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: first-try-poke
results: []
datasets:
- GawdSB/moveset_test
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# first-try-poke
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
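A minimal, hedged inference sketch is shown below; the example text is a placeholder and the label set depends on the fine-tuning data.

```python
# Minimal sketch: run the fine-tuned classifier on one example (placeholder text).
from transformers import pipeline

classifier = pipeline("text-classification", model="GawdSB/first-try-poke")
print(classifier("Thunderbolt, Quick Attack, Iron Tail, Volt Tackle"))
```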
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.45.2
- Pytorch 2.5.0
- Datasets 3.0.1
- Tokenizers 0.20.1 |
James2313123/L3-Darker-Planet-12.15B_4bpw-h6-EXL2 | James2313123 | 2024-10-27T07:50:31Z | 7 | 0 | null | [
"safetensors",
"llama",
"exl2",
"4bpw",
"en",
"base_model:DavidAU/L3-Darker-Planet-12.15B",
"base_model:quantized:DavidAU/L3-Darker-Planet-12.15B",
"license:apache-2.0",
"4-bit",
"region:us"
] | null | 2024-10-26T04:06:15Z | ---
license: apache-2.0
language:
- en
base_model:
- DavidAU/L3-Darker-Planet-12.15B
quantized_by: James2313123
tags:
- exl2
- 4bpw
---
### Model Description
4bpw-h6-exl2 quant of DavidAU's L3-Darker-Planet-12.15B
Link to original model and creator: https://huggingface.co/DavidAU/L3-Darker-Planet-12.15B
|
armageddonz1/salman-singh | armageddonz1 | 2024-10-27T07:50:13Z | 9 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T07:50:11Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Salman Singh, Salman
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# Salman Singh
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `Salman Singh, Salman` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
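For 🧨 diffusers users, a minimal, hedged sketch is shown below; it assumes access to the gated FLUX.1-dev base weights and enough memory, and the prompt and settings are illustrative.

```python
# Minimal sketch: apply the LoRA on top of FLUX.1-dev with diffusers.
# Assumes access to the gated base model; offloading helps on smaller GPUs.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("armageddonz1/salman-singh")
pipe.enable_model_cpu_offload()

image = pipe("portrait photo of Salman Singh, Salman", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("salman.png")
```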
|
Kishor798/speecht5_finetuned_emirhan_tr | Kishor798 | 2024-10-27T07:36:38Z | 148 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-24T15:59:06Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_finetuned_emirhan_tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_emirhan_tr
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
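A minimal, hedged inference sketch for SpeechT5 text-to-speech is shown below; the zero speaker embedding is only a placeholder (a real x-vector, e.g. from the CMU Arctic x-vectors dataset, gives a more natural voice), and the sample sentence is illustrative.

```python
# Minimal sketch: synthesize speech with the fine-tuned SpeechT5 checkpoint.
# The zero speaker embedding is a placeholder; use a real x-vector in practice.
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

model_id = "Kishor798/speecht5_finetuned_emirhan_tr"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Merhaba, bu bir deneme cümlesidir.", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```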
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
kh4dien/pythia-70m-deduped-gender | kh4dien | 2024-10-27T07:31:56Z | 265 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T07:13:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
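Since the usage section is still empty, a minimal, hedged sketch for plain causal-LM generation is shown below; the prompt and sampling settings are placeholders.

```python
# Minimal sketch: plain causal-LM generation with the fine-tuned Pythia checkpoint.
# Prompt and sampling settings are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kh4dien/pythia-70m-deduped-gender"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```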
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF | Triangle104 | 2024-10-27T06:44:51Z | 6 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated",
"base_model:quantized:SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T06:39:49Z | ---
base_model: SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF
This model was converted to GGUF format from [`SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated`](https://huggingface.co/SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SzilviaB/SzilviaB-Daredevil-LongWriter-8B_abliterated) for more details on the model.
---
Model details:
-
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
mlabonne/NeuralDaredevil-8B-abliterated
THUDM/LongWriter-llama3.1-8b
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated
  - model: THUDM/LongWriter-llama3.1-8b
merge_method: slerp
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF --hf-file szilviab-daredevil-longwriter-8b_abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF --hf-file szilviab-daredevil-longwriter-8b_abliterated-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF --hf-file szilviab-daredevil-longwriter-8b_abliterated-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/SzilviaB-Daredevil-LongWriter-8B_abliterated-Q6_K-GGUF --hf-file szilviab-daredevil-longwriter-8b_abliterated-q6_k.gguf -c 2048
```
|
Beegbrain/act_moss_test_2 | Beegbrain | 2024-10-27T06:38:12Z | 6 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-10-27T06:37:59Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf | RichardErkhov | 2024-10-27T06:26:54Z | 9 | 0 | null | [
"gguf",
"arxiv:2306.01708",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T02:46:57Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
MN-12B-Starcannon-v5.5-unofficial - GGUF
- Model creator: https://huggingface.co/ockerman0/
- Original model: https://huggingface.co/ockerman0/MN-12B-Starcannon-v5.5-unofficial/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MN-12B-Starcannon-v5.5-unofficial.Q2_K.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q2_K.gguf) | Q2_K | 4.46GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q3_K.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q3_K.gguf) | Q3_K | 5.67GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [MN-12B-Starcannon-v5.5-unofficial.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q4_0.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q4_0.gguf) | Q4_0 | 6.59GB |
| [MN-12B-Starcannon-v5.5-unofficial.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q4_K.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q4_K.gguf) | Q4_K | 6.96GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q4_1.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q4_1.gguf) | Q4_1 | 7.26GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q5_0.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q5_0.gguf) | Q5_0 | 7.93GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q5_K.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q5_K.gguf) | Q5_K | 8.13GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q5_1.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q5_1.gguf) | Q5_1 | 8.61GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q6_K.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q6_K.gguf) | Q6_K | 9.37GB |
| [MN-12B-Starcannon-v5.5-unofficial.Q8_0.gguf](https://huggingface.co/RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf/blob/main/MN-12B-Starcannon-v5.5-unofficial.Q8_0.gguf) | Q8_0 | 12.13GB |
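These files can also be loaded from Python via `llama-cpp-python`; a minimal sketch, not part of the original card (the filename is one of the quants listed above — use whichever you downloaded):

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Pull one of the quantized files listed above straight from this repo
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/ockerman0_-_MN-12B-Starcannon-v5.5-unofficial-gguf",
    filename="MN-12B-Starcannon-v5.5-unofficial.Q4_K_M.gguf",
    n_ctx=4096,
)

# Plain text completion; chat templating can be layered on top if desired
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```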
Original model description:
---
base_model:
- nothingiisreal/MN-12B-Starcannon-v3
library_name: transformers
tags:
- mergekit
- merge
---
# Unofficial continuation of [aetherwiing's](https://huggingface.co/aetherwiing) "Starcannon" series
By request, I made this merge of v3 and v5. It doesn't seem majorly dissimilar from v5, but it does seem to act slightly differently.
static quants at: https://huggingface.co/mradermacher/MN-12B-Starcannon-v5.5-unofficial-GGUF
imatrix quants at: https://huggingface.co/mradermacher/MN-12B-Starcannon-v5.5-unofficial-i1-GGUF
# mergedmodel
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [nothingiisreal/MN-12B-Starcannon-v3](https://huggingface.co/nothingiisreal/MN-12B-Starcannon-v3) as a base.
### Models Merged
The following models were included in the merge:
* /home/ciaran/Documents/Quantisingmodels/testmerges/MN-12B-Starcannon-v5-unofficial/
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /home/ciaran/Documents/Quantisingmodels/testmerges/MN-12B-Starcannon-v5-unofficial/
parameters:
density: 0.25
weight: 0.4
- model: nothingiisreal/MN-12B-Starcannon-v3
parameters:
density: 0.75
weight: 0.6
merge_method: ties
base_model: nothingiisreal/MN-12B-Starcannon-v3
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
|
denru/Behemoth-123B-v1.1-5_0bpw-h6-exl2-pippa | denru | 2024-10-27T06:20:18Z | 6 | 2 | null | [
"safetensors",
"mistral",
"license:other",
"5-bit",
"exl2",
"region:us"
] | null | 2024-10-27T03:44:22Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments that I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: https://huggingface.co/bartowski/Behemoth-123B-v1.1-GGUF (recommended for smaller quants)
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and everyone who donated in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
Triangle104/RPLament-22B-Q8_0-GGUF | Triangle104 | 2024-10-27T06:19:31Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SvdH/RPLament-22B",
"base_model:quantized:SvdH/RPLament-22B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T06:15:25Z | ---
base_model: SvdH/RPLament-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/RPLament-22B-Q8_0-GGUF
This model was converted to GGUF format from [`SvdH/RPLament-22B`](https://huggingface.co/SvdH/RPLament-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvdH/RPLament-22B) for more details on the model.
---
Model details:
-
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the DARE TIES merge method using ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as a base.
Models Merged
The following models were included in the merge:
allura-org/MS-Meadowlark-22B
Gryphe/Pantheon-RP-1.6.2-22b-Small
rAIfle/Acolyte-22B
anthracite-org/magnum-v4-22b
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  int8_mask: true
dtype: bfloat16
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.30
      density: 0.78
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 0.25
      density: 0.66
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.20
      density: 0.54
  - model: rAIfle/Acolyte-22B
    parameters:
      weight: 0.15
      density: 0.42
  - model: Gryphe/Pantheon-RP-1.6.2-22b-Small
    parameters:
      weight: 0.10
      density: 0.42
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/RPLament-22B-Q8_0-GGUF --hf-file rplament-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/RPLament-22B-Q8_0-GGUF --hf-file rplament-22b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/RPLament-22B-Q8_0-GGUF --hf-file rplament-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/RPLament-22B-Q8_0-GGUF --hf-file rplament-22b-q8_0.gguf -c 2048
```
|
glif-loradex-trainer/maxxd4240_PaperDoodles | glif-loradex-trainer | 2024-10-27T06:19:17Z | 496 | 1 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-10-27T06:18:33Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1730009779344__000002500_0.jpg
text: 'Fae of any gender with metallic wings, sleek armor fused with organic materials,
and eerily beautiful faces that hide cruelty. Their skin is a metallic sheen,
with sharp eyes in unnatural colors, and they wield weapons that seem grown
from the very earth or metal around them PapDoo '
- output:
url: samples/1730009803982__000002500_1.jpg
text: ' man with Border Collie in backyard PapDoo '
- output:
url: samples/1730009828602__000002500_2.jpg
text: 'gorgeous korean woman with white silky long hair and has deer antlers,
wears white camisole dress PapDoo '
- output:
url: samples/1730009853283__000002500_3.jpg
text: Low angle shot of people hugging each other in a circle, leaving a lot of
space in the middle PapDoo
- output:
url: samples/1730009878089__000002500_4.jpg
text: beatles abby road album cover PapDoo
- output:
url: samples/1730009902744__000002500_5.jpg
text: joker playing cards PapDoo
base_model: black-forest-labs/FLUX.1-dev
trigger: PapDoo
instance_prompt: PapDoo
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# PaperDoodles
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `maxxd4240`.
<Gallery />
## Trigger words
You should use `PapDoo` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/maxxd4240_PaperDoodles/tree/main) them in the Files & versions tab.
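For local generation with diffusers, a minimal sketch (not part of the original card; the `weight_name` is a guess — check the files listing for the actual `.safetensors` name):

```python
import torch
from diffusers import FluxPipeline

# FLUX.1-dev base model (gated; accept its license on the Hub first)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Load this LoRA on top of the base model
pipe.load_lora_weights(
    "glif-loradex-trainer/maxxd4240_PaperDoodles",
    weight_name="PaperDoodles.safetensors",  # assumed filename
)

# Prompts should include the trigger word "PapDoo"
image = pipe(
    "joker playing cards PapDoo",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("papdoo_sample.png")
```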
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf | RichardErkhov | 2024-10-27T06:08:02Z | 6 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T02:28:06Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Mistral-Nemo-Instruct-bellman-12b - GGUF
- Model creator: https://huggingface.co/neph1/
- Original model: https://huggingface.co/neph1/Mistral-Nemo-Instruct-bellman-12b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Mistral-Nemo-Instruct-bellman-12b.Q2_K.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q2_K.gguf) | Q2_K | 4.46GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q3_K.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q3_K.gguf) | Q3_K | 5.67GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [Mistral-Nemo-Instruct-bellman-12b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q4_0.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q4_0.gguf) | Q4_0 | 6.59GB |
| [Mistral-Nemo-Instruct-bellman-12b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q4_K.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q4_K.gguf) | Q4_K | 6.96GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q4_1.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q4_1.gguf) | Q4_1 | 7.26GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q5_0.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q5_0.gguf) | Q5_0 | 7.93GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q5_K.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q5_K.gguf) | Q5_K | 8.13GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q5_1.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q5_1.gguf) | Q5_1 | 8.61GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q6_K.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q6_K.gguf) | Q6_K | 9.37GB |
| [Mistral-Nemo-Instruct-bellman-12b.Q8_0.gguf](https://huggingface.co/RichardErkhov/neph1_-_Mistral-Nemo-Instruct-bellman-12b-gguf/blob/main/Mistral-Nemo-Instruct-bellman-12b.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
language:
- sv
license: apache-2.0
library_name: transformers
tags:
- unsloth
datasets:
- neph1/bellman-7b-finetune
- neph1/codefeedback-swedish
base_model:
- mistralai/Mistral-Nemo-Instruct-2407
---
# Model Card for Bellman
This version of bellman is finetuned from Mistral-Nemo-Instruct-2407.
It's a rank 128 qlora trained for about 1 epoch.
It's finetuned for prompt question answering, based on a dataset created from
Swedish wikipedia, with a lot of Sweden-centric questions.
New from previous versions is questions from a translated code-feedback dataset, as well as a number of stories.
Consider this a work in progress as I adjust the training for this new model size. Will provide a few updates to the model.
For GGUFs, please look to: <br> https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-GGUF and <br>
https://huggingface.co/mradermacher/Mistral-Nemo-Instruct-bellman-12b-i1-GGUF

[](https://ko-fi.com/T6T3S8VXY)
## Model Details
Rank: 128
Trained with Unsloth on a 3090
Differences from base model:
The base model is pretty good at Swedish already, but my 'vibe check' says this finetune is slightly improved. There are fewer weird wordings.
Bellman is trained on fairly short answers and tends to be less verbose.
### Training Parameters
per_device_train_batch_size = 2,<br>
gradient_accumulation_steps = 64,<br>
num_train_epochs=3,<br>
warmup_steps = 5,<br>
learning_rate = 1e-4,<br>
logging_steps = 15,<br>
optim = "adamw_8bit",<br>
weight_decay = 0.01,<br>
lr_scheduler_type = "linear",<br>
seed = 3407,<br>
per_device_eval_batch_size = 2,<br>
evaluation_strategy="steps",<br>
eval_accumulation_steps = 64,<br>
eval_steps = 15,<br>
eval_delay = 0,<br>
save_strategy="steps",<br>
save_steps=50,<br>
### Model Description
- **Developed by:** Me
- **Funded by:** Me
- **Model type:** Instruct
- **Language(s) (NLP):** Swedish
- **License:** Apache 2 License
- **Finetuned from model:** Mistral-Nemo-Instruct-2407
## Model Card Contact
[email protected]
|
sdadas/mmlw-retrieval-e5-large | sdadas | 2024-10-27T06:04:00Z | 32 | 3 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"information-retrieval",
"pl",
"arxiv:2402.13350",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-10-18T19:02:55Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- information-retrieval
language: pl
license: apache-2.0
widget:
- source_sentence: "query: Jak dożyć 100 lat?"
sentences:
- "passage: Trzeba zdrowo się odżywiać i uprawiać sport."
- "passage: Trzeba pić alkohol, imprezować i jeździć szybkimi autami."
- "passage: Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
---
<h1 align="center">MMLW-retrieval-e5-large</h1>
MMLW (muszę mieć lepszą wiadomość) are neural text encoders for Polish.
This model is optimized for information retrieval tasks. It can transform queries and passages to 1024 dimensional vectors.
The model was developed using a two-step procedure:
- In the first step, it was initialized with multilingual E5 checkpoint, and then trained with [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/) on a diverse corpus of 60 million Polish-English text pairs. We utilised [English FlagEmbeddings (BGE)](https://huggingface.co/BAAI/bge-large-en) as teacher models for distillation.
- The second step involved fine-tuning the obtained models with contrastive loss on the [Polish MS MARCO](https://huggingface.co/datasets/clarin-knext/msmarco-pl) training split. In order to improve the efficiency of contrastive training, we used large batch sizes - 1152 for small, 768 for base, and 288 for large models. Fine-tuning was conducted on a cluster of 12 A100 GPUs.
⚠️ **2023-12-26:** We have updated the model to a new version with improved results. You can still download the previous version using the **v1** tag: `AutoModel.from_pretrained("sdadas/mmlw-retrieval-e5-large", revision="v1")` ⚠️
## Usage (Sentence-Transformers)
⚠️ Our dense retrievers require the use of specific prefixes and suffixes when encoding texts. For this model, queries should be prefixed with **"query: "** and passages with **"passage: "** ⚠️
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
query_prefix = "query: "
answer_prefix = "passage: "
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
answer_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
answer_prefix + "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
answer_prefix + "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
model = SentenceTransformer("sdadas/mmlw-retrieval-e5-large")
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Trzeba zdrowo się odżywiać i uprawiać sport.
```
## Evaluation Results
The model achieves **NDCG@10** of **58.30** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Acknowledgements
This model was trained with the A100 GPU cluster support delivered by the Gdansk University of Technology within the TASK center initiative.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
sdadas/mmlw-retrieval-e5-base | sdadas | 2024-10-27T06:03:35Z | 32 | 1 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"information-retrieval",
"pl",
"arxiv:2402.13350",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2023-10-18T19:00:04Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- information-retrieval
language: pl
license: apache-2.0
widget:
- source_sentence: "query: Jak dożyć 100 lat?"
sentences:
- "passage: Trzeba zdrowo się odżywiać i uprawiać sport."
- "passage: Trzeba pić alkohol, imprezować i jeździć szybkimi autami."
- "passage: Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
---
<h1 align="center">MMLW-retrieval-e5-base</h1>
MMLW (muszę mieć lepszą wiadomość) are neural text encoders for Polish.
This model is optimized for information retrieval tasks. It can transform queries and passages to 768 dimensional vectors.
The model was developed using a two-step procedure:
- In the first step, it was initialized with multilingual E5 checkpoint, and then trained with [multilingual knowledge distillation method](https://aclanthology.org/2020.emnlp-main.365/) on a diverse corpus of 60 million Polish-English text pairs. We utilised [English FlagEmbeddings (BGE)](https://huggingface.co/BAAI/bge-base-en) as teacher models for distillation.
- The second step involved fine-tuning the obtained models with contrastive loss on the [Polish MS MARCO](https://huggingface.co/datasets/clarin-knext/msmarco-pl) training split. In order to improve the efficiency of contrastive training, we used large batch sizes - 1152 for small, 768 for base, and 288 for large models. Fine-tuning was conducted on a cluster of 12 A100 GPUs.
⚠️ **2023-12-26:** We have updated the model to a new version with improved results. You can still download the previous version using the **v1** tag: `AutoModel.from_pretrained("sdadas/mmlw-retrieval-e5-base", revision="v1")` ⚠️
## Usage (Sentence-Transformers)
⚠️ Our dense retrievers require the use of specific prefixes and suffixes when encoding texts. For this model, queries should be prefixed with **"query: "** and passages with **"passage: "** ⚠️
You can use the model like this with [sentence-transformers](https://www.SBERT.net):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
query_prefix = "query: "
answer_prefix = "passage: "
queries = [query_prefix + "Jak dożyć 100 lat?"]
answers = [
answer_prefix + "Trzeba zdrowo się odżywiać i uprawiać sport.",
answer_prefix + "Trzeba pić alkohol, imprezować i jeździć szybkimi autami.",
answer_prefix + "Gdy trwała kampania politycy zapewniali, że rozprawią się z zakazem niedzielnego handlu."
]
model = SentenceTransformer("sdadas/mmlw-retrieval-e5-base")
queries_emb = model.encode(queries, convert_to_tensor=True, show_progress_bar=False)
answers_emb = model.encode(answers, convert_to_tensor=True, show_progress_bar=False)
best_answer = cos_sim(queries_emb, answers_emb).argmax().item()
print(answers[best_answer])
# Trzeba zdrowo się odżywiać i uprawiać sport.
```
## Evaluation Results
The model achieves **NDCG@10** of **56.09** on the Polish Information Retrieval Benchmark. See [PIRB Leaderboard](https://huggingface.co/spaces/sdadas/pirb) for detailed results.
## Acknowledgements
This model was trained with the A100 GPU cluster support delivered by the Gdansk University of Technology within the TASK center initiative.
## Citation
```bibtex
@article{dadas2024pirb,
title={{PIRB}: A Comprehensive Benchmark of Polish Dense and Hybrid Text Retrieval Methods},
author={Sławomir Dadas and Michał Perełkiewicz and Rafał Poświata},
year={2024},
eprint={2402.13350},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
lanxiaofeng/yfl5 | lanxiaofeng | 2024-10-27T06:00:40Z | 14 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-10-27T06:00:23Z | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/yfl5_001280_00_20241027055646.png
text: TOK,Professional headshot of a young Asian man in his 20s with short black
hair and bangs, warm brown eyes, and a gentle smile. Subject wearing a navy
blue turtleneck sweater, positioned slightly off-center in a three-quarter
view. Soft, natural lighting emphasizing facial features. Blurred background
suggesting a modern office or home setting. High-resolution 4K image with
shallow depth of field, captured with an 85mm lens. Subtle retouching for
a polished yet authentic LinkedIn profile picture, conveying approachability
and professionalism.
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: TOK,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# yfl5
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `TOK,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
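It should also be possible to load the LoRA directly with diffusers; a minimal sketch, not part of the original card (the `weight_name` below is an assumption — check the Files tab for the actual `.safetensors` filename):

```python
import torch
from diffusers import FluxPipeline

# FLUX.1-dev base model (gated; accept its license on the Hub first)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach this LoRA; adjust weight_name to the actual file in this repo
pipe.load_lora_weights("lanxiaofeng/yfl5", weight_name="yfl5.safetensors")

# "TOK," is the trigger word for the trained subject
image = pipe(
    "TOK, professional headshot of a young man in a navy turtleneck, soft natural lighting",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("yfl5_sample.png")
```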
|
safe049/ParuMaid-Claude-Chinese-8B | safe049 | 2024-10-27T05:52:12Z | 12 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Undi95/Meta-Llama-3.1-8B-Claude",
"base_model:merge:Undi95/Meta-Llama-3.1-8B-Claude",
"base_model:safe049/ParuMaid-Llama3-Chinese-8B",
"base_model:merge:safe049/ParuMaid-Llama3-Chinese-8B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T03:58:47Z | ---
base_model:
- Undi95/Meta-Llama-3.1-8B-Claude
- safe049/ParuMaid-Llama3-Chinese-8B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [safe049/ParuMaid-Llama3-Chinese-8B](https://huggingface.co/safe049/ParuMaid-Llama3-Chinese-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Undi95/Meta-Llama-3.1-8B-Claude](https://huggingface.co/Undi95/Meta-Llama-3.1-8B-Claude)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# Configuration file for merging models with the ties method
base_model: "safe049/ParuMaid-Llama3-Chinese-8B" # Use ParuMaid-Llama3-Chinese-8B as the base model
models:
  - model: "safe049/ParuMaid-Llama3-Chinese-8B"
  - model: "Undi95/Meta-Llama-3.1-8B-Claude"
    parameters:
      density: 0.5 # Controls the sparsity of the merge
      weight: 0.5 # Merge weight, controls this model's influence on the output
merge_method: ties # Use ties as the merge method
parameters:
  normalize: false # Whether to normalize
  int8_mask: true # Use an int8 mask to reduce memory usage
dtype: float16 # Half-precision floats to reduce VRAM requirements
```
|
safe049/ParuMaid-Llama3-Chinese-8B | safe049 | 2024-10-27T05:36:21Z | 135 | 1 | transformers | [
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"dataset:neifuisan/Neuro-sama-QnA",
"base_model:safe049/Ruozhiba_llama3",
"base_model:quantized:safe049/Ruozhiba_llama3",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-19T13:13:31Z | ---
base_model: safe049/Ruozhiba_llama3
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
datasets:
- neifuisan/Neuro-sama-QnA
---

# Uploaded model
- **Developed by:** safe049
- **License:** apache-2.0
- **Finetuned from model :** safe049/Ruozhiba_llama3
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# Information
**This model is uncensored and based on Undi95/Llama-3-LewdPlay-8B, which is uncensored.**
It is not fully uncensored; you need a prompt like Dolphin's or something similar to fully uncensor it.
Trained on Google Colab.
Just out of boredom, it uses the prompt from this GitHub repository: https://github.com/kimjammer/Neuro/blob/master/Neuro.yaml
Trained on the Ruozhiba dataset, so it has fantastic Chinese conversation ability and good logical reasoning.
# Use
Just see the "Use this model" section on this page.
I'm using the Q4_K_M GGUF in Ollama.
RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf | RichardErkhov | 2024-10-27T05:31:42Z | 8 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T01:57:49Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
magnum-v4-12b - GGUF
- Model creator: https://huggingface.co/anthracite-org/
- Original model: https://huggingface.co/anthracite-org/magnum-v4-12b/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [magnum-v4-12b.Q2_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q2_K.gguf) | Q2_K | 4.46GB |
| [magnum-v4-12b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q3_K_S.gguf) | Q3_K_S | 5.15GB |
| [magnum-v4-12b.Q3_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q3_K.gguf) | Q3_K | 5.67GB |
| [magnum-v4-12b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q3_K_M.gguf) | Q3_K_M | 5.67GB |
| [magnum-v4-12b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q3_K_L.gguf) | Q3_K_L | 6.11GB |
| [magnum-v4-12b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.IQ4_XS.gguf) | IQ4_XS | 6.33GB |
| [magnum-v4-12b.Q4_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q4_0.gguf) | Q4_0 | 6.59GB |
| [magnum-v4-12b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.IQ4_NL.gguf) | IQ4_NL | 6.65GB |
| [magnum-v4-12b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q4_K_S.gguf) | Q4_K_S | 6.63GB |
| [magnum-v4-12b.Q4_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q4_K.gguf) | Q4_K | 6.96GB |
| [magnum-v4-12b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q4_K_M.gguf) | Q4_K_M | 6.96GB |
| [magnum-v4-12b.Q4_1.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q4_1.gguf) | Q4_1 | 7.26GB |
| [magnum-v4-12b.Q5_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q5_0.gguf) | Q5_0 | 7.93GB |
| [magnum-v4-12b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q5_K_S.gguf) | Q5_K_S | 7.93GB |
| [magnum-v4-12b.Q5_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q5_K.gguf) | Q5_K | 8.13GB |
| [magnum-v4-12b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q5_K_M.gguf) | Q5_K_M | 8.13GB |
| [magnum-v4-12b.Q5_1.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q5_1.gguf) | Q5_1 | 8.61GB |
| [magnum-v4-12b.Q6_K.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q6_K.gguf) | Q6_K | 9.37GB |
| [magnum-v4-12b.Q8_0.gguf](https://huggingface.co/RichardErkhov/anthracite-org_-_magnum-v4-12b-gguf/blob/main/magnum-v4-12b.Q8_0.gguf) | Q8_0 | 12.13GB |
Original model description:
---
license: apache-2.0
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
This model is fine-tuned on top of [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
## Prompting
A typical input would look like this:
```py
<s>[INST] SYSTEM MESSAGE
USER MESSAGE[/INST] ASSISTANT MESSAGE</s>[INST] USER MESSAGE[/INST]
```
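As a rough sketch (an assumption, not spelled out in the original card), the tokenizer shipped with the original checkpoint should render this wrapping for you via its chat template:

```python
from transformers import AutoTokenizer

# Repo name of the original (non-GGUF) checkpoint referenced above
tokenizer = AutoTokenizer.from_pretrained("anthracite-org/magnum-v4-12b")

messages = [
    {"role": "user", "content": "USER MESSAGE"},
]

# Produces the <s>[INST] ... [/INST] wrapping shown above
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```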
## SillyTavern templates
Below are Instruct and Context templates for use within SillyTavern.
<details><summary>context template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
<details><summary>instruct template</summary>
```yaml
default SillyTavern template works fine
```
</details><br>
## Axolotl config
<details><summary>See axolotl config</summary>
```yaml
base_model: mistralai/Mistral-Nemo-Instruct-2407
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
hub_model_id: anthracite-org/magnum-v4-12b-r2
hub_strategy: "all_checkpoints"
push_dataset_to_hub:
hf_use_auth_token: true
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system
type: custommistralv3tekken
- path: anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system
type: custommistralv3tekken
- path: anthracite-org/kalo-opus-instruct-3k-filtered-no-system
type: custommistralv3tekken
- path: anthracite-org/nopm_claude_writing_fixed
type: custommistralv3tekken
- path: anthracite-org/kalo_opus_misc_240827_no_system
type: custommistralv3tekken
- path: anthracite-org/kalo_misc_part2_no_system
type: custommistralv3tekken
#chat_template: chatml
shuffle_merged_datasets: true
#default_system_message: "You are an assistant that responds to the user."
dataset_prepared_path: /workspace/data/magnum-12b-data
val_set_size: 0.0
output_dir: /workspace/data/12b-fft-out
sequence_len: 32768
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
wandb_project: 12b-magnum-fft
wandb_entity:
wandb_watch:
wandb_name: v4-r2-attempt-01
wandb_log_model:
gradient_accumulation_steps: 2
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
evals_per_epoch:
eval_table_size:
eval_max_new_tokens:
saves_per_epoch: 2
debug:
deepspeed: deepspeed_configs/zero2.json
weight_decay: 0.1
fsdp:
fsdp_config:
special_tokens:
pad_token: <pad>
```
</details><br>
## Credits
We'd like to thank Recursal / Featherless for sponsoring the compute for this training run. Featherless has been hosting our Magnum models since the first 72B and has given thousands of people access to our models and helped us grow.
We would also like to thank all members of Anthracite who made this finetune possible.
## Datasets
- [anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system](https://huggingface.co/datasets/anthracite-org/c2_logs_32k_llama3_qwen2_v1.2_no_system)
- [anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal-no-system)
- [anthracite-org/kalo-opus-instruct-3k-filtered-no-system](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-3k-filtered-no-system)
- [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
- [anthracite-org/kalo_opus_misc_240827_no_system](https://huggingface.co/datasets/anthracite-org/kalo_opus_misc_240827_no_system)
- [anthracite-org/kalo_misc_part2_no_system](https://huggingface.co/datasets/anthracite-org/kalo_misc_part2_no_system)
## Training
The training was done for 2 epochs. We used 8x[H100s](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
## Safety
...
|
Triangle104/RPLament-22B-Q4_K_M-GGUF | Triangle104 | 2024-10-27T05:24:02Z | 14 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SvdH/RPLament-22B",
"base_model:quantized:SvdH/RPLament-22B",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T05:21:51Z | ---
base_model: SvdH/RPLament-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/RPLament-22B-Q4_K_M-GGUF
This model was converted to GGUF format from [`SvdH/RPLament-22B`](https://huggingface.co/SvdH/RPLament-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SvdH/RPLament-22B) for more details on the model.
---
Model details:
-
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the DARE TIES merge method using ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as a base.
Models Merged
The following models were included in the merge:
allura-org/MS-Meadowlark-22B
Gryphe/Pantheon-RP-1.6.2-22b-Small
rAIfle/Acolyte-22B
anthracite-org/magnum-v4-22b
Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  int8_mask: true
dtype: bfloat16
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.30
      density: 0.78
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 0.25
      density: 0.66
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.20
      density: 0.54
  - model: rAIfle/Acolyte-22B
    parameters:
      weight: 0.15
      density: 0.42
  - model: Gryphe/Pantheon-RP-1.6.2-22b-Small
    parameters:
      weight: 0.10
      density: 0.42
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/RPLament-22B-Q4_K_M-GGUF --hf-file rplament-22b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/RPLament-22B-Q4_K_M-GGUF --hf-file rplament-22b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/RPLament-22B-Q4_K_M-GGUF --hf-file rplament-22b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/RPLament-22B-Q4_K_M-GGUF --hf-file rplament-22b-q4_k_m.gguf -c 2048
```
|
Primeness/DeezNutz1 | Primeness | 2024-10-27T05:10:26Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T04:06:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF | Triangle104 | 2024-10-27T04:27:59Z | 7 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:SzilviaB/Magnum_Backyard_Party_12b",
"base_model:quantized:SzilviaB/Magnum_Backyard_Party_12b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-27T04:26:14Z | ---
base_model: SzilviaB/Magnum_Backyard_Party_12b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF
This model was converted to GGUF format from [`SzilviaB/Magnum_Backyard_Party_12b`](https://huggingface.co/SzilviaB/Magnum_Backyard_Party_12b) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SzilviaB/Magnum_Backyard_Party_12b) for more details on the model.
---
Model details:
-
This is a merge of pre-trained language models created using mergekit.
Merge Method
-
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
Sao10K/MN-BackyardAI-Party-12B-v1
anthracite-org/magnum-v4-12b
Configuration
-
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: anthracite-org/magnum-v4-12b
  - model: Sao10K/MN-BackyardAI-Party-12B-v1
merge_method: slerp
base_model: anthracite-org/magnum-v4-12b
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V shaped curve: Hermes for input & output, WizardMath in the middle layers
```
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF --hf-file magnum_backyard_party_12b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF --hf-file magnum_backyard_party_12b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF --hf-file magnum_backyard_party_12b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Magnum_Backyard_Party_12b-Q8_0-GGUF --hf-file magnum_backyard_party_12b-q8_0.gguf -c 2048
```
|
mergekit-community/mergekit-slerp-ocvgfla | mergekit-community | 2024-10-27T04:15:55Z | 127 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Coder-1.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T04:15:12Z | ---
base_model:
- Qwen/Qwen2.5-Coder-1.5B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- model: Qwen/Qwen2.5-Coder-1.5B-Instruct
merge_method: slerp
base_model: Qwen/Qwen2.5-Coder-1.5B-Instruct
dtype: bfloat16
parameters:
t: [0, 0.5, 1, 0.5, 0]
```
|
Orion-zhen/aya-expanse-32b-AWQ | Orion-zhen | 2024-10-27T04:06:26Z | 615 | 2 | transformers | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"el",
"fa",
"pl",
"id",
"cs",
"he",
"hi",
"nl",
"ro",
"ru",
"tr",
"uk",
"vi",
"base_model:CohereForAI/aya-expanse-32b",
"base_model:quantized:CohereForAI/aya-expanse-32b",
"license:gpl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-10-26T14:45:47Z | ---
license: gpl-3.0
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
base_model:
- CohereForAI/aya-expanse-32b
---
# aya-expanse-32b-AWQ
AWQ quantization of CohereForAI/aya-expanse-32b |
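A minimal loading sketch for this quant with 🤗 Transformers is shown below. It assumes the `autoawq` package is installed alongside `transformers` and `accelerate`; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Orion-zhen/aya-expanse-32b-AWQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Loading AWQ checkpoints through transformers requires the autoawq package;
# device_map="auto" needs accelerate and spreads the weights over available GPUs.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Translate to French: The weather is lovely today."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```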
jkazdan/llama8b-gsm-real-sftsd2 | jkazdan | 2024-10-27T04:04:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T06:36:30Z | ---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama8b-gsm-real-sftsd2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-gsm-real-sftsd2
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0758
- Num Input Tokens Seen: 1230344
## Model description
More information needed
## Intended uses & limitations
More information needed
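Since the base model is Llama-3-8B-Instruct and the repo name suggests GSM-style math data, a minimal inference sketch might look like the following; the prompt and generation settings are illustrative assumptions, not an official recommendation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "jkazdan/llama8b-gsm-real-sftsd2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# A GSM8K-style word problem, formatted with the Llama 3 chat template.
messages = [{
    "role": "user",
    "content": "Natalia sold clips to 48 of her friends in April, and then she sold "
               "half as many clips in May. How many clips did she sell altogether?"
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```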
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 2
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.8595 | 0 |
| 1.7928 | 0.0214 | 5 | 1.6692 | 24998 |
| 1.2768 | 0.0428 | 10 | 1.3468 | 51990 |
| 1.248 | 0.0642 | 15 | 1.2108 | 78552 |
| 1.183 | 0.0856 | 20 | 1.1767 | 104714 |
| 1.1417 | 0.1070 | 25 | 1.1611 | 130644 |
| 1.1608 | 0.1284 | 30 | 1.1526 | 157452 |
| 1.1661 | 0.1499 | 35 | 1.1440 | 183464 |
| 1.0883 | 0.1713 | 40 | 1.1382 | 208708 |
| 1.1298 | 0.1927 | 45 | 1.1333 | 234812 |
| 1.0514 | 0.2141 | 50 | 1.1295 | 260646 |
| 1.2335 | 0.2355 | 55 | 1.1261 | 286452 |
| 1.1238 | 0.2569 | 60 | 1.1214 | 313702 |
| 1.1498 | 0.2783 | 65 | 1.1190 | 339404 |
| 1.0992 | 0.2997 | 70 | 1.1170 | 366220 |
| 1.1073 | 0.3211 | 75 | 1.1143 | 391672 |
| 1.0477 | 0.3425 | 80 | 1.1115 | 418874 |
| 1.0637 | 0.3639 | 85 | 1.1097 | 444640 |
| 1.1512 | 0.3853 | 90 | 1.1077 | 472012 |
| 1.0145 | 0.4067 | 95 | 1.1054 | 498068 |
| 1.0404 | 0.4282 | 100 | 1.1038 | 524766 |
| 1.1086 | 0.4496 | 105 | 1.1029 | 550330 |
| 1.17 | 0.4710 | 110 | 1.1008 | 577238 |
| 1.0603 | 0.4924 | 115 | 1.1005 | 605334 |
| 1.0688 | 0.5138 | 120 | 1.0980 | 630636 |
| 1.032 | 0.5352 | 125 | 1.0974 | 655926 |
| 1.0415 | 0.5566 | 130 | 1.0953 | 683354 |
| 0.9503 | 0.5780 | 135 | 1.0945 | 711322 |
| 1.076 | 0.5994 | 140 | 1.0925 | 736596 |
| 1.0654 | 0.6208 | 145 | 1.0911 | 762078 |
| 1.0001 | 0.6422 | 150 | 1.0893 | 788874 |
| 1.1013 | 0.6636 | 155 | 1.0883 | 814254 |
| 1.0949 | 0.6850 | 160 | 1.0876 | 841134 |
| 1.1224 | 0.7064 | 165 | 1.0869 | 868964 |
| 1.1155 | 0.7279 | 170 | 1.0865 | 895250 |
| 1.0823 | 0.7493 | 175 | 1.0844 | 921904 |
| 1.0606 | 0.7707 | 180 | 1.0840 | 948558 |
| 1.089 | 0.7921 | 185 | 1.0835 | 973804 |
| 1.1386 | 0.8135 | 190 | 1.0828 | 1000896 |
| 1.1573 | 0.8349 | 195 | 1.0819 | 1027862 |
| 1.0802 | 0.8563 | 200 | 1.0800 | 1053914 |
| 1.0364 | 0.8777 | 205 | 1.0793 | 1080370 |
| 1.0947 | 0.8991 | 210 | 1.0786 | 1107266 |
| 1.074 | 0.9205 | 215 | 1.0778 | 1134620 |
| 1.0255 | 0.9419 | 220 | 1.0779 | 1161034 |
| 1.0109 | 0.9633 | 225 | 1.0763 | 1187784 |
| 1.0732 | 0.9847 | 230 | 1.0764 | 1213208 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.4.1.post300
- Datasets 2.20.0
- Tokenizers 0.20.1
|
rupeshs/sd15-lcm-square-openvino-int8 | rupeshs | 2024-10-27T03:29:13Z | 67 | 1 | null | [
"license:mit",
"region:us"
] | null | 2024-08-31T14:42:58Z | ---
license: mit
---
Original model : [sd-1.5-lcm-openvino](https://huggingface.co/Intel/sd-1.5-lcm-openvino)
This model can be used with [FastSD](https://github.com/rupeshs/fastsdcpu) on Intel AI PC NPUs.
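Outside of FastSD, the checkpoint can likely also be driven from Python. The sketch below assumes the repo follows the standard optimum-intel OpenVINO export layout and that `optimum[openvino]` is installed; the prompt, step count, and guidance value are illustrative, and device selection is left at the default.

```python
from optimum.intel import OVLatentConsistencyModelPipeline

# Assumes the standard optimum-intel export layout; FastSD wraps similar logic internally.
pipe = OVLatentConsistencyModelPipeline.from_pretrained("rupeshs/sd15-lcm-square-openvino-int8")

image = pipe(
    "a cabin in a snowy forest, soft morning light",
    num_inference_steps=4,  # LCM checkpoints need very few steps
    guidance_scale=1.0,
).images[0]
image.save("out.png")
```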
|
naqi72/SpeechT5_Finetune_English_Naqi | naqi72 | 2024-10-27T03:23:04Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"tts",
"generated_from_trainer",
"dataset:microsoft/speecht5_tts",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2024-10-26T08:44:36Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- tts
- generated_from_trainer
datasets:
- microsoft/speecht5_tts
model-index:
- name: SyedNaqi_TechSpeechT5_TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SyedNaqi_TechSpeechT5_TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the TTS_English_Technical_data dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4597
## Model description
More information needed
## Intended uses & limitations
More information needed
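As a SpeechT5 fine-tune, the model should work with the standard 🤗 Transformers TTS stack of processor + model + HiFi-GAN vocoder. The sketch below is illustrative: the x-vector dataset and the example sentence are assumptions, not part of this repo.

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

model_id = "naqi72/SpeechT5_Finetune_English_Naqi"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Any 512-dim x-vector works as the speaker embedding; this public set is commonly used.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(
    text="The API gateway routes requests to the correct microservice.",
    return_tensors="pt",
)
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```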
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 14
- eval_batch_size: 10
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 28
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 1.3331 | 1.5649 | 500 | 0.5719 |
| 1.0675 | 3.1299 | 1000 | 0.4861 |
| 1.0427 | 4.6948 | 1500 | 0.4732 |
| 1.0088 | 6.2598 | 2000 | 0.4667 |
| 1.0052 | 7.8247 | 2500 | 0.4637 |
| 0.9862 | 9.3897 | 3000 | 0.4613 |
| 0.9788 | 10.9546 | 3500 | 0.4597 |
| 0.9792 | 12.5196 | 4000 | 0.4597 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF | Triangle104 | 2024-10-27T03:19:10Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:nbeerbower/phi3.5-gutenberg-4B",
"base_model:quantized:nbeerbower/phi3.5-gutenberg-4B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-31T09:23:48Z | ---
library_name: transformers
base_model: nbeerbower/phi3.5-gutenberg-4B
datasets:
- jondurbin/gutenberg-dpo-v0.1
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF
This model was converted to GGUF format from [`nbeerbower/phi3.5-gutenberg-4B`](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) for more details on the model.
---
Model details:
-
microsoft/Phi-3.5-mini-instruct finetuned on jondurbin/gutenberg-dpo-v0.1.
Method
Finetuned using 2x RTX 4060 Ti for 3 epochs.
Training followed the "Fine-tune Llama 3 with ORPO" guide, applied here to Phi-3.5.
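For orientation only, a minimal `trl` ORPO sketch on the same dataset might look like this; the hyperparameters, output path, and sequence length are illustrative assumptions, not the author's actual script.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# gutenberg-dpo-v0.1 provides prompt / chosen / rejected columns, which is what ORPO expects.
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="phi3.5-gutenberg-orpo",
    num_train_epochs=3,              # matches the 3 epochs mentioned above
    per_device_train_batch_size=1,   # illustrative
    learning_rate=5e-6,              # illustrative, not the author's value
    beta=0.1,                        # ORPO lambda; illustrative
    max_length=2048,
)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```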
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF --hf-file phi3.5-gutenberg-4b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF --hf-file phi3.5-gutenberg-4b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF --hf-file phi3.5-gutenberg-4b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q8_0-GGUF --hf-file phi3.5-gutenberg-4b-q8_0.gguf -c 2048
```
|
Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF | Triangle104 | 2024-10-27T03:17:59Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:nbeerbower/phi3.5-gutenberg-4B",
"base_model:quantized:nbeerbower/phi3.5-gutenberg-4B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-31T09:21:40Z | ---
library_name: transformers
base_model: nbeerbower/phi3.5-gutenberg-4B
datasets:
- jondurbin/gutenberg-dpo-v0.1
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF
This model was converted to GGUF format from [`nbeerbower/phi3.5-gutenberg-4B`](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) for more details on the model.
---
Model details:
-
microsoft/Phi-3.5-mini-instruct finetuned on jondurbin/gutenberg-dpo-v0.1.
Method
Finetuned using 2x RTX 4060 Ti for 3 epochs.
Training followed the "Fine-tune Llama 3 with ORPO" guide, applied here to Phi-3.5.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF --hf-file phi3.5-gutenberg-4b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF --hf-file phi3.5-gutenberg-4b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF --hf-file phi3.5-gutenberg-4b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q6_K-GGUF --hf-file phi3.5-gutenberg-4b-q6_k.gguf -c 2048
```
|
devagonal/flan-t5-rouge-durga-q5-clean-4 | devagonal | 2024-10-27T03:10:52Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-27T03:09:11Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-rouge-durga-q5-clean-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-rouge-durga-q5-clean-4
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0070
- Rouge1: 0.7173
- Rouge2: 0.6752
- Rougel: 0.7164
- Rougelsum: 0.7174
## Model description
More information needed
## Intended uses & limitations
More information needed
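The checkpoint can be queried like any other FLAN-T5 seq2seq model. A minimal sketch follows; the input text is a placeholder, since the training data is not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "devagonal/flan-t5-rouge-durga-q5-clean-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Replace with a question/prompt in the style the model was fine-tuned on.
inputs = tokenizer("Who is Durga?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```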
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 1.9349 | 1.0 | 17 | 1.4016 | 0.2948 | 0.1024 | 0.2865 | 0.2864 |
| 1.7388 | 2.0 | 34 | 0.9216 | 0.3438 | 0.1391 | 0.3309 | 0.3314 |
| 0.9571 | 3.0 | 51 | 0.6340 | 0.3703 | 0.1739 | 0.3615 | 0.3621 |
| 0.8848 | 4.0 | 68 | 0.3948 | 0.4248 | 0.2516 | 0.4184 | 0.4196 |
| 0.6464 | 5.0 | 85 | 0.2382 | 0.4324 | 0.2759 | 0.4225 | 0.4232 |
| 0.3926 | 6.0 | 102 | 0.1711 | 0.4578 | 0.3070 | 0.4530 | 0.4534 |
| 0.3694 | 7.0 | 119 | 0.1300 | 0.4510 | 0.3250 | 0.4473 | 0.4485 |
| 0.2783 | 8.0 | 136 | 0.0952 | 0.4941 | 0.3828 | 0.4924 | 0.4926 |
| 0.2033 | 9.0 | 153 | 0.0825 | 0.5179 | 0.4106 | 0.5156 | 0.5147 |
| 0.1751 | 10.0 | 170 | 0.0788 | 0.4996 | 0.3951 | 0.4973 | 0.4977 |
| 0.1538 | 11.0 | 187 | 0.0589 | 0.5613 | 0.4773 | 0.5582 | 0.5592 |
| 0.2292 | 12.0 | 204 | 0.0544 | 0.5735 | 0.4866 | 0.5708 | 0.5720 |
| 0.1612 | 13.0 | 221 | 0.0437 | 0.5849 | 0.5058 | 0.5844 | 0.5851 |
| 0.0878 | 14.0 | 238 | 0.0388 | 0.6113 | 0.5346 | 0.6117 | 0.6120 |
| 0.0826 | 15.0 | 255 | 0.0403 | 0.6233 | 0.5515 | 0.6220 | 0.6219 |
| 0.0801 | 16.0 | 272 | 0.0290 | 0.6391 | 0.5769 | 0.6402 | 0.6390 |
| 0.1168 | 17.0 | 289 | 0.0242 | 0.6365 | 0.5697 | 0.6357 | 0.6373 |
| 0.0749 | 18.0 | 306 | 0.0242 | 0.6385 | 0.5713 | 0.6374 | 0.6388 |
| 0.0542 | 19.0 | 323 | 0.0175 | 0.6632 | 0.6103 | 0.6623 | 0.6636 |
| 0.0724 | 20.0 | 340 | 0.0154 | 0.6913 | 0.6393 | 0.6906 | 0.6922 |
| 0.0796 | 21.0 | 357 | 0.0177 | 0.6779 | 0.6248 | 0.6768 | 0.6777 |
| 0.0595 | 22.0 | 374 | 0.0116 | 0.7008 | 0.6612 | 0.7002 | 0.6995 |
| 0.0347 | 23.0 | 391 | 0.0135 | 0.6904 | 0.6442 | 0.6895 | 0.6895 |
| 0.0497 | 24.0 | 408 | 0.0110 | 0.6984 | 0.6488 | 0.6979 | 0.6982 |
| 0.0276 | 25.0 | 425 | 0.0103 | 0.7038 | 0.6581 | 0.7029 | 0.7038 |
| 0.0386 | 26.0 | 442 | 0.0087 | 0.7164 | 0.6773 | 0.7169 | 0.7161 |
| 0.0155 | 27.0 | 459 | 0.0084 | 0.7170 | 0.6787 | 0.7176 | 0.7174 |
| 0.0483 | 28.0 | 476 | 0.0077 | 0.7109 | 0.6657 | 0.7106 | 0.7114 |
| 0.0309 | 29.0 | 493 | 0.0071 | 0.7144 | 0.6706 | 0.7138 | 0.7152 |
| 0.0289 | 30.0 | 510 | 0.0070 | 0.7173 | 0.6752 | 0.7164 | 0.7174 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF | Triangle104 | 2024-10-27T03:08:23Z | 5 | 1 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"base_model:nbeerbower/phi3.5-gutenberg-4B",
"base_model:quantized:nbeerbower/phi3.5-gutenberg-4B",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-08-31T09:15:45Z | ---
library_name: transformers
base_model: nbeerbower/phi3.5-gutenberg-4B
datasets:
- jondurbin/gutenberg-dpo-v0.1
license: mit
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/phi3.5-gutenberg-4B`](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B) for more details on the model.
---
Model details:
-
microsoft/Phi-3.5-mini-instruct finetuned on jondurbin/gutenberg-dpo-v0.1.
Method
Finetuned using 2x RTX 4060 Ti for 3 epochs.
Training followed the "Fine-tune Llama 3 with ORPO" guide, applied here to Phi-3.5.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF --hf-file phi3.5-gutenberg-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF --hf-file phi3.5-gutenberg-4b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF --hf-file phi3.5-gutenberg-4b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/phi3.5-gutenberg-4B-Q4_K_M-GGUF --hf-file phi3.5-gutenberg-4b-q4_k_m.gguf -c 2048
```
|
Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF | Triangle104 | 2024-10-27T03:03:50Z | 5 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-27T02:59:40Z | ---
base_model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.2-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) for more details on the model.
---
Model details:
-
The Drummer becomes hornier
Recipe based on MarsupialAI/Monstral-123B. It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
TheDrummer/Cydonia-22B-v1.2
anthracite-org/magnum-v4-22b
Configuration
The following YAML configuration was used to produce this model:
models:
- model: TheDrummer/Cydonia-22B-v1.2
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q8_0-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q8_0.gguf -c 2048
```
|
Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF | Triangle104 | 2024-10-27T02:56:17Z | 15 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-27T02:53:34Z | ---
base_model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.2-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) for more details on the model.
---
Model details:
-
The Drummer becomes hornier
Recipe based on MarsupialAI/Monstral-123B. It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
TheDrummer/Cydonia-22B-v1.2
anthracite-org/magnum-v4-22b
Configuration
The following YAML configuration was used to produce this model:
models:
- model: TheDrummer/Cydonia-22B-v1.2
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q6_K-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q6_k.gguf -c 2048
```
|
devagonal/flan-t5-rouge-durga-q5-clean-3 | devagonal | 2024-10-27T02:50:43Z | 114 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-base",
"base_model:finetune:google/flan-t5-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-10-27T02:49:00Z | ---
library_name: transformers
license: apache-2.0
base_model: google/flan-t5-base
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: flan-t5-rouge-durga-q5-clean-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-rouge-durga-q5-clean-3
This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0047
- Rouge1: 0.7284
- Rouge2: 0.6952
- Rougel: 0.7275
- Rougelsum: 0.7286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.4075 | 1.0 | 25 | 1.3986 | 0.2974 | 0.0955 | 0.2908 | 0.2922 |
| 1.7776 | 2.0 | 50 | 0.9113 | 0.3446 | 0.1314 | 0.3321 | 0.3327 |
| 1.1018 | 3.0 | 75 | 0.5636 | 0.3736 | 0.1990 | 0.3647 | 0.3664 |
| 0.62 | 4.0 | 100 | 0.3676 | 0.4405 | 0.2807 | 0.4313 | 0.4312 |
| 0.5543 | 5.0 | 125 | 0.2240 | 0.4433 | 0.3035 | 0.4374 | 0.4391 |
| 0.6031 | 6.0 | 150 | 0.1842 | 0.4202 | 0.2714 | 0.4186 | 0.4199 |
| 0.3551 | 7.0 | 175 | 0.1245 | 0.4640 | 0.3485 | 0.4639 | 0.4645 |
| 0.3913 | 8.0 | 200 | 0.1189 | 0.4526 | 0.3303 | 0.4542 | 0.4543 |
| 0.1866 | 9.0 | 225 | 0.0837 | 0.5110 | 0.3965 | 0.5065 | 0.5078 |
| 0.2824 | 10.0 | 250 | 0.0652 | 0.5409 | 0.4593 | 0.5417 | 0.5430 |
| 0.1881 | 11.0 | 275 | 0.0579 | 0.5475 | 0.4585 | 0.5471 | 0.5489 |
| 0.0936 | 12.0 | 300 | 0.0487 | 0.6001 | 0.5283 | 0.6013 | 0.6025 |
| 0.1443 | 13.0 | 325 | 0.0401 | 0.6235 | 0.5599 | 0.6251 | 0.6254 |
| 0.1081 | 14.0 | 350 | 0.0331 | 0.6252 | 0.5568 | 0.6271 | 0.6271 |
| 0.0567 | 15.0 | 375 | 0.0295 | 0.6447 | 0.5794 | 0.6439 | 0.6449 |
| 0.1883 | 16.0 | 400 | 0.0205 | 0.6969 | 0.6496 | 0.6939 | 0.6963 |
| 0.0633 | 17.0 | 425 | 0.0230 | 0.6875 | 0.6370 | 0.6864 | 0.6870 |
| 0.0601 | 18.0 | 450 | 0.0152 | 0.7016 | 0.6554 | 0.7006 | 0.7014 |
| 0.0938 | 19.0 | 475 | 0.0150 | 0.7094 | 0.6648 | 0.7081 | 0.7095 |
| 0.05 | 20.0 | 500 | 0.0104 | 0.7209 | 0.6805 | 0.7204 | 0.7219 |
| 0.0556 | 21.0 | 525 | 0.0108 | 0.6988 | 0.6495 | 0.6961 | 0.6985 |
| 0.0248 | 22.0 | 550 | 0.0075 | 0.7191 | 0.6794 | 0.7168 | 0.7189 |
| 0.0247 | 23.0 | 575 | 0.0059 | 0.7255 | 0.6888 | 0.7238 | 0.7256 |
| 0.0186 | 24.0 | 600 | 0.0064 | 0.7283 | 0.6945 | 0.7271 | 0.7280 |
| 0.0338 | 25.0 | 625 | 0.0051 | 0.7277 | 0.6936 | 0.7261 | 0.7268 |
| 0.0377 | 26.0 | 650 | 0.0044 | 0.7252 | 0.6885 | 0.7237 | 0.7251 |
| 0.022 | 27.0 | 675 | 0.0044 | 0.7273 | 0.6936 | 0.7265 | 0.7272 |
| 0.0104 | 28.0 | 700 | 0.0056 | 0.7249 | 0.6886 | 0.7239 | 0.7258 |
| 0.0117 | 29.0 | 725 | 0.0052 | 0.7286 | 0.6950 | 0.7274 | 0.7286 |
| 0.0228 | 30.0 | 750 | 0.0047 | 0.7284 | 0.6952 | 0.7275 | 0.7286 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_thresh07_fold5 | g-assismoraes | 2024-10-27T02:40:07Z | 160 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T02:35:52Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh07_fold5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh07_fold5
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8074
- Precision Samples: 0.1339
- Recall Samples: 0.5358
- F1 Samples: 0.1978
- Precision Macro: 0.8220
- Recall Macro: 0.3540
- F1 Macro: 0.2374
- Precision Micro: 0.1213
- Recall Micro: 0.4565
- F1 Micro: 0.1917
- Precision Weighted: 0.5384
- Recall Weighted: 0.4565
- F1 Weighted: 0.1237
## Model description
More information needed
## Intended uses & limitations
More information needed
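The samples/micro/macro metrics below suggest a multi-label setup, and the model name hints at a 0.7 decision threshold. Under those assumptions, inference might look like the following sketch; label names come from `model.config.id2label` and may be generic if the repo does not define them.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "g-assismoraes/mdeberta-semeval25_thresh07_fold5"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Example news paragraph to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# Multi-label decoding: independent sigmoid per label, thresholded at 0.7 (assumed).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.7]
print(predicted)
```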
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.6336 | 1.0 | 19 | 9.9911 | 0.3379 | 0.1862 | 0.1862 | 0.9913 | 0.2103 | 0.2039 | 0.2195 | 0.0811 | 0.1184 | 0.9320 | 0.0811 | 0.0309 |
| 9.4459 | 2.0 | 38 | 9.6833 | 0.1397 | 0.2709 | 0.1733 | 0.9607 | 0.2304 | 0.2085 | 0.1365 | 0.1562 | 0.1457 | 0.8325 | 0.1562 | 0.0460 |
| 9.4688 | 3.0 | 57 | 9.4995 | 0.1076 | 0.3609 | 0.1535 | 0.9405 | 0.2654 | 0.2126 | 0.1060 | 0.2492 | 0.1487 | 0.7808 | 0.2492 | 0.0574 |
| 9.0478 | 4.0 | 76 | 9.3397 | 0.1137 | 0.3878 | 0.1664 | 0.9204 | 0.2690 | 0.2156 | 0.1112 | 0.2733 | 0.1581 | 0.7045 | 0.2733 | 0.0727 |
| 9.0907 | 5.0 | 95 | 9.1679 | 0.1250 | 0.4917 | 0.1880 | 0.8628 | 0.3093 | 0.2304 | 0.1237 | 0.3784 | 0.1864 | 0.5947 | 0.3784 | 0.1100 |
| 8.688 | 6.0 | 114 | 9.0335 | 0.1401 | 0.5148 | 0.2038 | 0.8527 | 0.3255 | 0.2328 | 0.1291 | 0.4174 | 0.1972 | 0.5824 | 0.4174 | 0.1163 |
| 9.009 | 7.0 | 133 | 8.9266 | 0.1373 | 0.5275 | 0.1996 | 0.8310 | 0.3441 | 0.2341 | 0.1223 | 0.4444 | 0.1918 | 0.5496 | 0.4444 | 0.1177 |
| 9.1189 | 8.0 | 152 | 8.8585 | 0.1358 | 0.5207 | 0.1983 | 0.8195 | 0.3317 | 0.2335 | 0.1234 | 0.4264 | 0.1914 | 0.5347 | 0.4264 | 0.1178 |
| 8.5834 | 9.0 | 171 | 8.8189 | 0.1353 | 0.5529 | 0.2007 | 0.8211 | 0.3597 | 0.2366 | 0.1228 | 0.4685 | 0.1946 | 0.5369 | 0.4685 | 0.1223 |
| 9.118 | 10.0 | 190 | 8.8074 | 0.1339 | 0.5358 | 0.1978 | 0.8220 | 0.3540 | 0.2374 | 0.1213 | 0.4565 | 0.1917 | 0.5384 | 0.4565 | 0.1237 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
jkazdan/llama8b-gsm-real-sftsd0 | jkazdan | 2024-10-27T02:36:50Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-25T05:01:37Z | ---
library_name: transformers
license: llama3
base_model: meta-llama/Meta-Llama-3-8B-Instruct
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama8b-gsm-real-sftsd0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama8b-gsm-real-sftsd0
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0752
- Num Input Tokens Seen: 1229006
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 2
- eval_batch_size: 2
- seed: 0
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Input Tokens Seen |
|:-------------:|:------:|:----:|:---------------:|:-----------------:|
| No log | 0 | 0 | 1.8595 | 0 |
| 1.6646 | 0.0214 | 5 | 1.6691 | 26714 |
| 1.3941 | 0.0428 | 10 | 1.3452 | 52296 |
| 1.2411 | 0.0642 | 15 | 1.2074 | 79864 |
| 1.144 | 0.0856 | 20 | 1.1764 | 104020 |
| 1.1912 | 0.1070 | 25 | 1.1616 | 130512 |
| 1.127 | 0.1284 | 30 | 1.1517 | 155912 |
| 1.1697 | 0.1499 | 35 | 1.1448 | 182116 |
| 1.0971 | 0.1713 | 40 | 1.1402 | 209706 |
| 1.0521 | 0.1927 | 45 | 1.1344 | 236660 |
| 1.0659 | 0.2141 | 50 | 1.1290 | 263428 |
| 1.1183 | 0.2355 | 55 | 1.1256 | 288292 |
| 1.1267 | 0.2569 | 60 | 1.1225 | 313402 |
| 1.1013 | 0.2783 | 65 | 1.1199 | 340332 |
| 1.1299 | 0.2997 | 70 | 1.1168 | 366298 |
| 1.1047 | 0.3211 | 75 | 1.1143 | 392504 |
| 1.0842 | 0.3425 | 80 | 1.1125 | 419160 |
| 1.0832 | 0.3639 | 85 | 1.1103 | 445990 |
| 1.0846 | 0.3853 | 90 | 1.1084 | 470416 |
| 1.1243 | 0.4067 | 95 | 1.1055 | 497082 |
| 1.1145 | 0.4282 | 100 | 1.1037 | 522912 |
| 1.0974 | 0.4496 | 105 | 1.1022 | 549760 |
| 1.1282 | 0.4710 | 110 | 1.1005 | 576006 |
| 1.0717 | 0.4924 | 115 | 1.0985 | 604070 |
| 1.115 | 0.5138 | 120 | 1.0969 | 629968 |
| 1.1012 | 0.5352 | 125 | 1.0961 | 655968 |
| 1.0704 | 0.5566 | 130 | 1.0944 | 681960 |
| 1.1512 | 0.5780 | 135 | 1.0931 | 707296 |
| 1.1787 | 0.5994 | 140 | 1.0914 | 733542 |
| 1.1522 | 0.6208 | 145 | 1.0905 | 760392 |
| 1.1262 | 0.6422 | 150 | 1.0902 | 786228 |
| 1.0528 | 0.6636 | 155 | 1.0900 | 813666 |
| 1.0857 | 0.6850 | 160 | 1.0889 | 841520 |
| 1.0427 | 0.7064 | 165 | 1.0878 | 869128 |
| 1.0686 | 0.7279 | 170 | 1.0866 | 894572 |
| 1.1171 | 0.7493 | 175 | 1.0850 | 919558 |
| 1.1109 | 0.7707 | 180 | 1.0850 | 946534 |
| 1.0353 | 0.7921 | 185 | 1.0829 | 972934 |
| 1.1547 | 0.8135 | 190 | 1.0821 | 999680 |
| 1.0947 | 0.8349 | 195 | 1.0813 | 1026274 |
| 1.0983 | 0.8563 | 200 | 1.0809 | 1053180 |
| 1.0926 | 0.8777 | 205 | 1.0794 | 1080840 |
| 1.0706 | 0.8991 | 210 | 1.0785 | 1107496 |
| 1.1047 | 0.9205 | 215 | 1.0776 | 1135776 |
| 1.0513 | 0.9419 | 220 | 1.0783 | 1162684 |
| 0.9836 | 0.9633 | 225 | 1.0768 | 1188342 |
| 1.1886 | 0.9847 | 230 | 1.0759 | 1213528 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.4.1.post300
- Datasets 2.20.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_thresh07_fold2 | g-assismoraes | 2024-10-27T02:27:23Z | 160 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T02:23:01Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh07_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh07_fold2
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.8782
- Precision Samples: 0.1269
- Recall Samples: 0.5091
- F1 Samples: 0.1861
- Precision Macro: 0.8102
- Recall Macro: 0.3387
- F1 Macro: 0.2255
- Precision Micro: 0.1102
- Recall Micro: 0.4121
- F1 Micro: 0.1739
- Precision Weighted: 0.5429
- Recall Weighted: 0.4121
- F1 Weighted: 0.1111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.355 | 1.0 | 19 | 9.8653 | 0.2069 | 0.2069 | 0.2069 | 0.9912 | 0.2 | 0.1927 | 0.2069 | 0.0909 | 0.1263 | 0.9279 | 0.0909 | 0.0312 |
| 10.0044 | 2.0 | 38 | 9.5730 | 0.1202 | 0.3372 | 0.1668 | 0.9504 | 0.2349 | 0.1992 | 0.1195 | 0.2152 | 0.1537 | 0.7983 | 0.2152 | 0.0542 |
| 9.8134 | 3.0 | 57 | 9.4479 | 0.1009 | 0.3894 | 0.1499 | 0.9212 | 0.2630 | 0.2050 | 0.0991 | 0.2727 | 0.1454 | 0.7402 | 0.2727 | 0.0631 |
| 9.5892 | 4.0 | 76 | 9.3410 | 0.0994 | 0.4441 | 0.1525 | 0.8893 | 0.2972 | 0.2095 | 0.0995 | 0.3394 | 0.1538 | 0.6814 | 0.3394 | 0.0733 |
| 9.2794 | 5.0 | 95 | 9.2355 | 0.0943 | 0.4636 | 0.1483 | 0.8898 | 0.3061 | 0.2101 | 0.0939 | 0.3515 | 0.1481 | 0.6822 | 0.3515 | 0.0746 |
| 9.4934 | 6.0 | 114 | 9.1286 | 0.1037 | 0.4837 | 0.1608 | 0.8705 | 0.3137 | 0.2142 | 0.0997 | 0.3667 | 0.1567 | 0.6447 | 0.3667 | 0.0857 |
| 9.2134 | 7.0 | 133 | 9.0032 | 0.1158 | 0.4985 | 0.1753 | 0.8404 | 0.3253 | 0.2210 | 0.1080 | 0.3909 | 0.1693 | 0.5880 | 0.3909 | 0.1021 |
| 9.063 | 8.0 | 152 | 8.9394 | 0.1242 | 0.5035 | 0.1831 | 0.8190 | 0.3331 | 0.2210 | 0.1084 | 0.4 | 0.1705 | 0.5465 | 0.4 | 0.0992 |
| 9.3385 | 9.0 | 171 | 8.8911 | 0.1274 | 0.5082 | 0.1868 | 0.8079 | 0.3350 | 0.2226 | 0.1093 | 0.4091 | 0.1725 | 0.5394 | 0.4091 | 0.1067 |
| 8.5105 | 10.0 | 190 | 8.8782 | 0.1269 | 0.5091 | 0.1861 | 0.8102 | 0.3387 | 0.2255 | 0.1102 | 0.4121 | 0.1739 | 0.5429 | 0.4121 | 0.1111 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_thresh07_fold1 | g-assismoraes | 2024-10-27T02:22:57Z | 196 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T02:18:40Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh07_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh07_fold1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.6241
- Precision Samples: 0.1204
- Recall Samples: 0.5520
- F1 Samples: 0.1866
- Precision Macro: 0.8184
- Recall Macro: 0.3704
- F1 Macro: 0.2547
- Precision Micro: 0.1111
- Recall Micro: 0.4660
- F1 Micro: 0.1794
- Precision Weighted: 0.5223
- Recall Weighted: 0.4660
- F1 Weighted: 0.1091
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.7091 | 1.0 | 19 | 9.6339 | 0.2123 | 0.1986 | 0.1986 | 0.9911 | 0.2333 | 0.2259 | 0.2014 | 0.0895 | 0.1239 | 0.9285 | 0.0895 | 0.0300 |
| 10.3202 | 2.0 | 38 | 9.2779 | 0.1495 | 0.2401 | 0.1695 | 0.9702 | 0.2444 | 0.2282 | 0.1508 | 0.1420 | 0.1463 | 0.8572 | 0.1420 | 0.0406 |
| 9.7575 | 3.0 | 57 | 9.1258 | 0.1484 | 0.3153 | 0.1854 | 0.9337 | 0.2634 | 0.2371 | 0.1474 | 0.2130 | 0.1742 | 0.7249 | 0.2130 | 0.0727 |
| 10.0673 | 4.0 | 76 | 9.0135 | 0.1121 | 0.4228 | 0.1630 | 0.9006 | 0.2995 | 0.2406 | 0.1101 | 0.3302 | 0.1651 | 0.6447 | 0.3302 | 0.0812 |
| 9.7122 | 5.0 | 95 | 8.9266 | 0.1032 | 0.4806 | 0.1584 | 0.8482 | 0.3301 | 0.2474 | 0.1014 | 0.3920 | 0.1612 | 0.5456 | 0.3920 | 0.0933 |
| 9.755 | 6.0 | 114 | 8.8301 | 0.1091 | 0.5002 | 0.1670 | 0.8389 | 0.3407 | 0.2505 | 0.1037 | 0.4167 | 0.1661 | 0.5468 | 0.4167 | 0.1010 |
| 9.3217 | 7.0 | 133 | 8.7446 | 0.1146 | 0.5249 | 0.1766 | 0.8395 | 0.3500 | 0.2522 | 0.1085 | 0.4383 | 0.1739 | 0.5486 | 0.4383 | 0.1057 |
| 9.4961 | 8.0 | 152 | 8.6814 | 0.1185 | 0.5383 | 0.1825 | 0.8299 | 0.3619 | 0.2550 | 0.1114 | 0.4537 | 0.1789 | 0.5313 | 0.4537 | 0.1083 |
| 9.4404 | 9.0 | 171 | 8.6369 | 0.1179 | 0.5406 | 0.1818 | 0.8180 | 0.3641 | 0.2539 | 0.1092 | 0.4568 | 0.1763 | 0.5213 | 0.4568 | 0.1073 |
| 9.1011 | 10.0 | 190 | 8.6241 | 0.1204 | 0.5520 | 0.1866 | 0.8184 | 0.3704 | 0.2547 | 0.1111 | 0.4660 | 0.1794 | 0.5223 | 0.4660 | 0.1091 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
MikeRoz/TheDrummer_Behemoth-123B-v1.1-8.0bpw-h8-exl2 | MikeRoz | 2024-10-27T02:17:09Z | 7 | 2 | null | [
"safetensors",
"mistral",
"license:other",
"8-bit",
"exl2",
"region:us"
] | null | 2024-10-26T18:25:33Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Behemoth 123B v1.1 🦣 - Creative Edition
*When you spend your whole life living under a dome, even the idea of an ocean seems impossible to imagine.*

## Description
> One of the few other models that's done this for me is the OG Command R 35B. So seeing Behemoth v1.1 have a similar feel to that but with much higher general intelligence really makes it a favourite of mine
> I was real happy with v1.1 the other day. I've done some tests on v1 and it's a lot better.
> v1 had those glimpses of creativity, but now it's more consistent (with v1.1). It feels like a new model in comparison.
> v1 had slop bro. v1.1 makes it irrelevant. The jump is like 720p to 4k. Seriously.
> The creativity for v1.1 is off the charts compared to v1, like it's juiced. v1 had these moments where I would say... 'Shit, I've never seen a model respond with prose like this, let me regenerate to see what else I get.' Now, even though every regeneration had a flow of possibilities, sometimes those possibilities never came. v1.1 is comparable to xxx for the first time, every generation. It directs and guides the scene, scenario and characters unlike anything else
> It's about the f***ing prose man. The atmosphere that revolves around the characters. Not just the damn dialogue or introspection. v1.1 will pull from a message 7 generations ago. That window I opened will appear in a future response with the noise from the courtyard filtering through it. The experience of not knowing what this model will produce because it's different than anything else is what keeps it engaging.
## Links
- Original: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1
- GGUF: https://huggingface.co/TheDrummer/Behemoth-123B-v1.1-GGUF
- iMatrix: WIP
## Arsenal (Supported Chat Templates)
- Mistral
- Smart, adaptable, familiar
- Metharme (Pygmalion in ST)
- Creative, unhinged, unique
- Alpaca
- Creative, unique, unhinged
- Text Completion
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV
## What's Next?
- Already have plans for a v2!
## Special Thanks
- Thank you to each and every one who donated on [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- KinjiHakari777, Dr. Fjut, Kistara, Pseudo, AlexTheVP, Dakkidaze, EvarinSharath'fe, ONTHEREDTEAM, F, Mariana, Garg, Silva, Grozi, & **Phaelon**

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
|
iwan-rg/XLM-Roberta-base-Finetuned-Arabic | iwan-rg | 2024-10-27T02:16:22Z | 5 | 0 | null | [
"safetensors",
"xlm-roberta",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"region:us"
] | null | 2024-10-27T02:15:55Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-25OCT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ANAD-mlm-0.15-base-25OCT
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5193
## Model description
More information needed
## Intended uses & limitations
More information needed
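Since this is a masked-language-model fine-tune of XLM-R, a quick sanity check with the fill-mask pipeline is the most direct way to try it; the example sentence is illustrative.

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="iwan-rg/XLM-Roberta-base-Finetuned-Arabic")

# XLM-R uses <mask> as its mask token.
# The sentence means: "The capital of the Kingdom of Saudi Arabia is <mask>."
for prediction in fill_mask("عاصمة المملكة العربية السعودية هي <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```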
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| No log | 0.0941 | 100 | 1.9039 |
| No log | 0.1881 | 200 | 1.8793 |
| No log | 0.2822 | 300 | 1.8643 |
| No log | 0.3763 | 400 | 1.8479 |
| 2.0696 | 0.4703 | 500 | 1.8380 |
| 2.0696 | 0.5644 | 600 | 1.8336 |
| 2.0696 | 0.6585 | 700 | 1.8226 |
| 2.0696 | 0.7525 | 800 | 1.8231 |
| 2.0696 | 0.8466 | 900 | 1.8136 |
| 2.0049 | 0.9407 | 1000 | 1.8161 |
| 2.0049 | 1.0347 | 1100 | 1.8056 |
| 2.0049 | 1.1288 | 1200 | 1.7934 |
| 2.0049 | 1.2229 | 1300 | 1.7887 |
| 2.0049 | 1.3169 | 1400 | 1.7749 |
| 1.9612 | 1.4110 | 1500 | 1.7726 |
| 1.9612 | 1.5051 | 1600 | 1.7679 |
| 1.9612 | 1.5992 | 1700 | 1.7543 |
| 1.9612 | 1.6932 | 1800 | 1.7473 |
| 1.9612 | 1.7873 | 1900 | 1.7413 |
| 1.911 | 1.8814 | 2000 | 1.7334 |
| 1.911 | 1.9754 | 2100 | 1.7302 |
| 1.911 | 2.0695 | 2200 | 1.7172 |
| 1.911 | 2.1636 | 2300 | 1.7187 |
| 1.911 | 2.2576 | 2400 | 1.7076 |
| 1.8628 | 2.3517 | 2500 | 1.7011 |
| 1.8628 | 2.4458 | 2600 | 1.7001 |
| 1.8628 | 2.5398 | 2700 | 1.6929 |
| 1.8628 | 2.6339 | 2800 | 1.6929 |
| 1.8628 | 2.7280 | 2900 | 1.6848 |
| 1.8328 | 2.8220 | 3000 | 1.6804 |
| 1.8328 | 2.9161 | 3100 | 1.6762 |
| 1.8328 | 3.0102 | 3200 | 1.6759 |
| 1.8328 | 3.1042 | 3300 | 1.6715 |
| 1.8328 | 3.1983 | 3400 | 1.6653 |
| 1.8018 | 3.2924 | 3500 | 1.6590 |
| 1.8018 | 3.3864 | 3600 | 1.6519 |
| 1.8018 | 3.4805 | 3700 | 1.6493 |
| 1.8018 | 3.5746 | 3800 | 1.6458 |
| 1.8018 | 3.6686 | 3900 | 1.6415 |
| 1.7708 | 3.7627 | 4000 | 1.6397 |
| 1.7708 | 3.8568 | 4100 | 1.6345 |
| 1.7708 | 3.9508 | 4200 | 1.6351 |
| 1.7708 | 4.0449 | 4300 | 1.6324 |
| 1.7708 | 4.1390 | 4400 | 1.6271 |
| 1.7501 | 4.2331 | 4500 | 1.6253 |
| 1.7501 | 4.3271 | 4600 | 1.6248 |
| 1.7501 | 4.4212 | 4700 | 1.6153 |
| 1.7501 | 4.5153 | 4800 | 1.6191 |
| 1.7501 | 4.6093 | 4900 | 1.6135 |
| 1.7283 | 4.7034 | 5000 | 1.6087 |
| 1.7283 | 4.7975 | 5100 | 1.6072 |
| 1.7283 | 4.8915 | 5200 | 1.5991 |
| 1.7283 | 4.9856 | 5300 | 1.6026 |
| 1.7283 | 5.0797 | 5400 | 1.5989 |
| 1.7105 | 5.1737 | 5500 | 1.6011 |
| 1.7105 | 5.2678 | 5600 | 1.5958 |
| 1.7105 | 5.3619 | 5700 | 1.5894 |
| 1.7105 | 5.4559 | 5800 | 1.5871 |
| 1.7105 | 5.5500 | 5900 | 1.5865 |
| 1.6816 | 5.6441 | 6000 | 1.5871 |
| 1.6816 | 5.7381 | 6100 | 1.5840 |
| 1.6816 | 5.8322 | 6200 | 1.5842 |
| 1.6816 | 5.9263 | 6300 | 1.5772 |
| 1.6816 | 6.0203 | 6400 | 1.5769 |
| 1.6745 | 6.1144 | 6500 | 1.5740 |
| 1.6745 | 6.2085 | 6600 | 1.5690 |
| 1.6745 | 6.3025 | 6700 | 1.5700 |
| 1.6745 | 6.3966 | 6800 | 1.5704 |
| 1.6745 | 6.4907 | 6900 | 1.5667 |
| 1.6639 | 6.5847 | 7000 | 1.5653 |
| 1.6639 | 6.6788 | 7100 | 1.5647 |
| 1.6639 | 6.7729 | 7200 | 1.5625 |
| 1.6639 | 6.8670 | 7300 | 1.5572 |
| 1.6639 | 6.9610 | 7400 | 1.5551 |
| 1.6509 | 7.0551 | 7500 | 1.5533 |
| 1.6509 | 7.1492 | 7600 | 1.5522 |
| 1.6509 | 7.2432 | 7700 | 1.5509 |
| 1.6509 | 7.3373 | 7800 | 1.5468 |
| 1.6509 | 7.4314 | 7900 | 1.5488 |
| 1.6344 | 7.5254 | 8000 | 1.5459 |
| 1.6344 | 7.6195 | 8100 | 1.5463 |
| 1.6344 | 7.7136 | 8200 | 1.5452 |
| 1.6344 | 7.8076 | 8300 | 1.5407 |
| 1.6344 | 7.9017 | 8400 | 1.5416 |
| 1.6281 | 7.9958 | 8500 | 1.5400 |
| 1.6281 | 8.0898 | 8600 | 1.5372 |
| 1.6281 | 8.1839 | 8700 | 1.5350 |
| 1.6281 | 8.2780 | 8800 | 1.5341 |
| 1.6281 | 8.3720 | 8900 | 1.5345 |
| 1.6132 | 8.4661 | 9000 | 1.5325 |
| 1.6132 | 8.5602 | 9100 | 1.5293 |
| 1.6132 | 8.6542 | 9200 | 1.5288 |
| 1.6132 | 8.7483 | 9300 | 1.5280 |
| 1.6132 | 8.8424 | 9400 | 1.5287 |
| 1.6123 | 8.9364 | 9500 | 1.5272 |
| 1.6123 | 9.0305 | 9600 | 1.5255 |
| 1.6123 | 9.1246 | 9700 | 1.5251 |
| 1.6123 | 9.2186 | 9800 | 1.5233 |
| 1.6123 | 9.3127 | 9900 | 1.5221 |
| 1.5993 | 9.4068 | 10000 | 1.5223 |
| 1.5993 | 9.5009 | 10100 | 1.5216 |
| 1.5993 | 9.5949 | 10200 | 1.5215 |
| 1.5993 | 9.6890 | 10300 | 1.5207 |
| 1.5993 | 9.7831 | 10400 | 1.5204 |
| 1.5959 | 9.8771 | 10500 | 1.5198 |
| 1.5959 | 9.9712 | 10600 | 1.5193 |
### Framework versions
- Transformers 4.43.4
- Pytorch 2.1.1+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1
|
Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF | Triangle104 | 2024-10-27T02:08:26Z | 11 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"base_model:quantized:knifeayumu/Cydonia-v1.2-Magnum-v4-22B",
"license:other",
"region:us",
"conversational"
] | null | 2024-10-27T02:05:27Z | ---
base_model: knifeayumu/Cydonia-v1.2-Magnum-v4-22B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
---
# Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF
This model was converted to GGUF format from [`knifeayumu/Cydonia-v1.2-Magnum-v4-22B`](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/knifeayumu/Cydonia-v1.2-Magnum-v4-22B) for more details on the model.
---
Model details:
-
The Drummer becomes hornier
Recipe based on MarsupialAI/Monstral-123B. It should work since it's the same Mistral, TheDrummer and MarsupialAI, right?
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the SLERP merge method.
Models Merged
The following models were included in the merge:
TheDrummer/Cydonia-22B-v1.2
anthracite-org/magnum-v4-22b
Configuration
The following YAML configuration was used to produce this model:
models:
- model: TheDrummer/Cydonia-22B-v1.2
- model: anthracite-org/magnum-v4-22b
merge_method: slerp
base_model: TheDrummer/Cydonia-22B-v1.2
parameters:
t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q4_k_s.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q4_k_s.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Cydonia-v1.2-Magnum-v4-22B-Q4_K_S-GGUF --hf-file cydonia-v1.2-magnum-v4-22b-q4_k_s.gguf -c 2048
```
|
GoldenLlama/gemma-2-9b-it-Dglee_v2 | GoldenLlama | 2024-10-27T01:58:11Z | 10 | 0 | null | [
"pytorch",
"gemma2",
"finance",
"unsloth",
"trl",
"sft",
"krx",
"text-generation",
"conversational",
"ko",
"dataset:amphora/krx-sample-instructions",
"dataset:mssongit/KorfinQA",
"dataset:BCCard/BCCard-Finance-Kor-QnA",
"base_model:unsloth/gemma-2-9b",
"base_model:finetune:unsloth/gemma-2-9b",
"license:gemma",
"region:us"
] | text-generation | 2024-10-24T01:10:14Z | ---
base_model:
- unsloth/gemma-2-9b
datasets:
- amphora/krx-sample-instructions
- mssongit/KorfinQA
- BCCard/BCCard-Finance-Kor-QnA
language:
- ko
license: gemma
pipeline_tag: text-generation
tags:
- finance
- unsloth
- trl
- sft
- krx
---
|
AndreaUnibo/JetMoE_rank_lstm_updated | AndreaUnibo | 2024-10-27T01:55:54Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"jetmoe",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-10-27T01:50:26Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
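A minimal sketch, assuming the checkpoint loads as a standard `transformers` causal LM with its stored 4-bit quantization config (as the repository tags suggest); treat it as illustrative only:
```python
# Illustrative sketch only: assumes a standard causal-LM checkpoint (bitsandbytes needed for the 4-bit weights).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AndreaUnibo/JetMoE_rank_lstm_updated"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```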
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
g-assismoraes/mdeberta-semeval25_thresh05_fold4 | g-assismoraes | 2024-10-27T01:55:18Z | 160 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T01:51:21Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh05_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh05_fold4
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.5086
- Precision Samples: 0.1232
- Recall Samples: 0.6805
- F1 Samples: 0.1926
- Precision Macro: 0.7047
- Recall Macro: 0.4909
- F1 Macro: 0.2967
- Precision Micro: 0.1117
- Recall Micro: 0.6056
- F1 Micro: 0.1887
- Precision Weighted: 0.3714
- Recall Weighted: 0.6056
- F1 Weighted: 0.1569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a minimal `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
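A minimal sketch of how these values map onto 🤗 `TrainingArguments` (model, tokenizer and dataset setup omitted; the actual training script was not published, so this is illustrative only):
```python
# Illustrative only: the reported hyperparameters expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mdeberta-semeval25_thresh05_fold4",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    lr_scheduler_type="linear",
    num_train_epochs=10,
)
```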
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.3744 | 1.0 | 19 | 10.7629 | 0.1828 | 0.2343 | 0.1841 | 0.9818 | 0.2537 | 0.2400 | 0.1818 | 0.1333 | 0.1538 | 0.8819 | 0.1333 | 0.0439 |
| 10.0969 | 2.0 | 38 | 10.4159 | 0.1231 | 0.3680 | 0.1689 | 0.9209 | 0.2950 | 0.2486 | 0.1192 | 0.2556 | 0.1625 | 0.7180 | 0.2556 | 0.0689 |
| 9.7498 | 3.0 | 57 | 10.2297 | 0.1092 | 0.4790 | 0.1654 | 0.8840 | 0.3444 | 0.2605 | 0.1095 | 0.3861 | 0.1707 | 0.6254 | 0.3861 | 0.0985 |
| 9.5369 | 4.0 | 76 | 10.0889 | 0.1057 | 0.4884 | 0.1612 | 0.8183 | 0.3568 | 0.2616 | 0.1044 | 0.4028 | 0.1658 | 0.5387 | 0.4028 | 0.0963 |
| 9.4609 | 5.0 | 95 | 9.8987 | 0.1214 | 0.5891 | 0.1864 | 0.7782 | 0.4130 | 0.2806 | 0.1131 | 0.5028 | 0.1847 | 0.4659 | 0.5028 | 0.1323 |
| 8.9672 | 6.0 | 114 | 9.7571 | 0.1206 | 0.6236 | 0.1885 | 0.7592 | 0.4478 | 0.2900 | 0.1114 | 0.5528 | 0.1854 | 0.4382 | 0.5528 | 0.1464 |
| 8.7597 | 7.0 | 133 | 9.6573 | 0.1245 | 0.6351 | 0.1916 | 0.7128 | 0.4550 | 0.2900 | 0.1114 | 0.5583 | 0.1858 | 0.3724 | 0.5583 | 0.1470 |
| 8.2651 | 8.0 | 152 | 9.5744 | 0.1248 | 0.6639 | 0.1936 | 0.7052 | 0.4777 | 0.2964 | 0.1122 | 0.5889 | 0.1884 | 0.3682 | 0.5889 | 0.1526 |
| 8.755 | 9.0 | 171 | 9.5269 | 0.1246 | 0.6746 | 0.1945 | 0.6937 | 0.4909 | 0.2965 | 0.1134 | 0.6056 | 0.1910 | 0.3584 | 0.6056 | 0.1578 |
| 8.5311 | 10.0 | 190 | 9.5086 | 0.1232 | 0.6805 | 0.1926 | 0.7047 | 0.4909 | 0.2967 | 0.1117 | 0.6056 | 0.1887 | 0.3714 | 0.6056 | 0.1569 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
g-assismoraes/mdeberta-semeval25_thresh05_fold3 | g-assismoraes | 2024-10-27T01:51:17Z | 160 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T01:46:57Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh05_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh05_fold3
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set (a sketch of how these multi-label metrics are typically computed follows the list):
- Loss: 9.3100
- Precision Samples: 0.1032
- Recall Samples: 0.6755
- F1 Samples: 0.1696
- Precision Macro: 0.6795
- Recall Macro: 0.4191
- F1 Macro: 0.2016
- Precision Micro: 0.1008
- Recall Micro: 0.5892
- F1 Micro: 0.1721
- Precision Weighted: 0.3940
- Recall Weighted: 0.5892
- F1 Weighted: 0.1438
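The Samples/Macro/Micro/Weighted variants reported above correspond to the standard scikit-learn averaging modes for multi-label classification; the sketch below shows how they can be computed from binarized predictions (illustrative only, not the exact evaluation script):
```python
# Illustrative only: how the averaged multi-label precision/recall/F1 values are typically computed.
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[1, 0, 1], [0, 1, 0]])  # multi-hot gold labels
y_pred = np.array([[1, 1, 1], [0, 1, 0]])  # multi-hot predictions (e.g. sigmoid > 0.5)

for average in ("samples", "macro", "micro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=average, zero_division=0)
    print(average, round(p, 4), round(r, 4), round(f1, 4))
```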
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 11.0696 | 1.0 | 19 | 10.6040 | 0.1690 | 0.2482 | 0.1849 | 0.9705 | 0.1772 | 0.1621 | 0.1667 | 0.1388 | 0.1515 | 0.8753 | 0.1388 | 0.0429 |
| 10.1938 | 2.0 | 38 | 10.2562 | 0.1054 | 0.4123 | 0.1552 | 0.8984 | 0.2377 | 0.1723 | 0.1031 | 0.3031 | 0.1538 | 0.6860 | 0.3031 | 0.0742 |
| 9.6611 | 3.0 | 57 | 10.0662 | 0.0922 | 0.4680 | 0.1440 | 0.8729 | 0.2666 | 0.1780 | 0.0920 | 0.3598 | 0.1465 | 0.6321 | 0.3598 | 0.0845 |
| 9.0954 | 4.0 | 76 | 9.8720 | 0.1062 | 0.5533 | 0.1669 | 0.8057 | 0.3171 | 0.1969 | 0.1022 | 0.4419 | 0.1660 | 0.5290 | 0.4419 | 0.1089 |
| 9.3051 | 5.0 | 95 | 9.6646 | 0.1077 | 0.5701 | 0.1687 | 0.7450 | 0.3349 | 0.1947 | 0.1002 | 0.4731 | 0.1654 | 0.4512 | 0.4731 | 0.1090 |
| 9.0741 | 6.0 | 114 | 9.5296 | 0.1025 | 0.6164 | 0.1655 | 0.7381 | 0.3658 | 0.2022 | 0.0983 | 0.5156 | 0.1652 | 0.4412 | 0.5156 | 0.1245 |
| 9.5375 | 7.0 | 133 | 9.4199 | 0.1022 | 0.6282 | 0.1657 | 0.6966 | 0.3790 | 0.1954 | 0.0983 | 0.5354 | 0.1661 | 0.4215 | 0.5354 | 0.1328 |
| 8.4783 | 8.0 | 152 | 9.3455 | 0.1038 | 0.6645 | 0.1698 | 0.6831 | 0.4140 | 0.2024 | 0.1006 | 0.5807 | 0.1715 | 0.4004 | 0.5807 | 0.1438 |
| 9.1304 | 9.0 | 171 | 9.3003 | 0.1046 | 0.6769 | 0.1716 | 0.6840 | 0.4272 | 0.2060 | 0.1017 | 0.5977 | 0.1738 | 0.4007 | 0.5977 | 0.1494 |
| 9.0129 | 10.0 | 190 | 9.3100 | 0.1032 | 0.6755 | 0.1696 | 0.6795 | 0.4191 | 0.2016 | 0.1008 | 0.5892 | 0.1721 | 0.3940 | 0.5892 | 0.1438 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
async0x42/Rombos-LLM-V2.5-Qwen-32b-exl2_4.5bpw | async0x42 | 2024-10-27T01:48:01Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2024-10-27T01:33:41Z | ---
library_name: transformers
base_model:
- Qwen/Qwen2.5-32B-Instruct
license: apache-2.0
---
# Rombos-LLM-V2.5-Qwen-32b

Rombos-LLM-V2.5-Qwen-32b is a continuously finetuned version of Qwen2.5-32B. I noticed recently that the Qwen team did not adopt my continuous finetuning method, despite its great benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the *TIES* merge method.
This version of the model shows higher performance than the original instruct and base models.
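A minimal usage sketch for the full-precision weights is shown below (the repo id is an assumption, not confirmed by this card; this particular repository holds an exl2 quantization, which requires an exllamav2-compatible loader such as TabbyAPI or text-generation-webui rather than plain `transformers`):
```python
# Minimal sketch for the full-precision model; the repo id is assumed, not confirmed by this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rombodawg/Rombos-LLM-V2.5-Qwen-32b"  # assumed full-precision repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain the TIES merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```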
Quants: (Coming soon)
GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF
EXL2:
Benchmarks: (Coming soon) |
g-assismoraes/mdeberta-semeval25_thresh05_fold1 | g-assismoraes | 2024-10-27T01:42:40Z | 195 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T01:38:42Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_thresh05_fold1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_thresh05_fold1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 8.3426
- Precision Samples: 0.0847
- Recall Samples: 0.6765
- F1 Samples: 0.1429
- Precision Macro: 0.6975
- Recall Macro: 0.4876
- F1 Macro: 0.2607
- Precision Micro: 0.0834
- Recall Micro: 0.6327
- F1 Micro: 0.1474
- Precision Weighted: 0.4030
- Recall Weighted: 0.6327
- F1 Weighted: 0.1331
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 10.7546 | 1.0 | 19 | 9.5932 | 0.1128 | 0.3025 | 0.1528 | 0.9525 | 0.2692 | 0.2334 | 0.1107 | 0.2006 | 0.1427 | 0.8087 | 0.2006 | 0.0589 |
| 10.2899 | 2.0 | 38 | 9.2634 | 0.0827 | 0.4297 | 0.1304 | 0.8750 | 0.3222 | 0.2372 | 0.0827 | 0.3364 | 0.1328 | 0.6341 | 0.3364 | 0.0675 |
| 9.7416 | 3.0 | 57 | 9.0902 | 0.0707 | 0.5288 | 0.1185 | 0.8338 | 0.3889 | 0.2434 | 0.0704 | 0.4506 | 0.1217 | 0.5824 | 0.4506 | 0.0806 |
| 10.0751 | 4.0 | 76 | 8.9610 | 0.0702 | 0.5700 | 0.1195 | 0.8020 | 0.4042 | 0.2462 | 0.0709 | 0.5 | 0.1241 | 0.5009 | 0.5 | 0.0894 |
| 9.6663 | 5.0 | 95 | 8.7696 | 0.0704 | 0.6170 | 0.1212 | 0.7620 | 0.4396 | 0.2425 | 0.0725 | 0.5648 | 0.1285 | 0.4550 | 0.5648 | 0.1053 |
| 9.575 | 6.0 | 114 | 8.6391 | 0.0767 | 0.6493 | 0.1308 | 0.7593 | 0.4571 | 0.2634 | 0.0777 | 0.5988 | 0.1375 | 0.4346 | 0.5988 | 0.1221 |
| 9.1185 | 7.0 | 133 | 8.4813 | 0.0807 | 0.6714 | 0.1365 | 0.7286 | 0.4721 | 0.2551 | 0.0810 | 0.6265 | 0.1435 | 0.4204 | 0.6265 | 0.1275 |
| 9.2126 | 8.0 | 152 | 8.4100 | 0.0846 | 0.6765 | 0.1423 | 0.7116 | 0.4876 | 0.2620 | 0.0831 | 0.6327 | 0.1468 | 0.4082 | 0.6327 | 0.1329 |
| 9.104 | 9.0 | 171 | 8.3677 | 0.0838 | 0.6834 | 0.1414 | 0.7006 | 0.4987 | 0.2623 | 0.0827 | 0.6358 | 0.1464 | 0.3923 | 0.6358 | 0.1322 |
| 8.8092 | 10.0 | 190 | 8.3426 | 0.0847 | 0.6765 | 0.1429 | 0.6975 | 0.4876 | 0.2607 | 0.0834 | 0.6327 | 0.1474 | 0.4030 | 0.6327 | 0.1331 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
hemantkgupta/gita-text-generation-gpt2 | hemantkgupta | 2024-10-27T01:41:27Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-10-27T01:41:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
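As an illustrative starting point, assuming this is a standard GPT-2 text-generation checkpoint (per the repository tags):
```python
# Minimal sketch: treat the repo as a standard GPT-2 text-generation checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="hemantkgupta/gita-text-generation-gpt2")
print(generator("You have the right to work, but", max_new_tokens=40)[0]["generated_text"])
```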
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mengkedalai/w2v-bert-fi | Mengkedalai | 2024-10-27T01:14:25Z | 82 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-10-24T13:06:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
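An illustrative sketch, assuming the checkpoint is a fine-tuned Wav2Vec2-BERT CTC model for automatic speech recognition (per the repository tags):
```python
# Minimal sketch: run the checkpoint through the ASR pipeline on a local 16 kHz audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Mengkedalai/w2v-bert-fi")
print(asr("sample.wav"))  # path to a local audio file; "sample.wav" is a placeholder
```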
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
async0x42/Rombos-LLM-V2.5-Qwen-32b-exl2_5.0bpw | async0x42 | 2024-10-27T01:10:55Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2024-10-27T01:01:12Z | ---
library_name: transformers
base_model:
- Qwen/Qwen2.5-32B-Instruct
license: apache-2.0
---
# Rombos-LLM-V2.5-Qwen-32b

Rombos-LLM-V2.5-Qwen-32b is a continuously finetuned version of Qwen2.5-32B. I noticed recently that the Qwen team did not adopt my continuous finetuning method, despite its great benefits and lack of downsides, so I took it upon myself to merge the instruct model with the base model using the *TIES* merge method.
This version of the model shows higher performance than the original instruct and base models.
Quants: (Coming soon)
GGUF: https://huggingface.co/bartowski/Replete-LLM-V2.5-Qwen-32b-GGUF
EXL2:
Benchmarks: (Coming soon) |
QuantFactory/Llama-3.1-Hawkish-8B-GGUF | QuantFactory | 2024-10-27T01:10:33Z | 59 | 4 | transformers | [
"transformers",
"gguf",
"text-generation",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-10-27T00:28:47Z |
---
pipeline_tag: text-generation
base_model: Llama-3.1-Hawkish-8B
library_name: transformers
---
[](https://hf.co/QuantFactory)
# QuantFactory/Llama-3.1-Hawkish-8B-GGUF
This is a quantized version of [mukaj/Llama-3.1-Hawkish-8B](https://huggingface.co/mukaj/Llama-3.1-Hawkish-8B), created using llama.cpp.
# Original Model Card
---
language:
- en
pipeline_tag: text-generation
tags:
- llama-3.1
- finance
- economics
- math
- reasoning
- finetuning
license: other
library_name: transformers
---
<font size="1">Model v1.0</font>

## Llama-3.1-Hawkish-8B v1
The model has been further finetuned on a newly generated set of 50M high-quality tokens on financial topics such as Economics, Fixed Income, Equities, Corporate Financing, Derivatives and Portfolio Management. The data was gathered from publicly available sources and went through several stages of curation, from an initial 250M+ tokens down to the final instruction data. To help mitigate forgetting of the original finetune, the data was mixed with instruction sets on Coding, General Knowledge, NLP and Conversational Dialogue.
The model has shown improvement over the original model on a number of benchmarks, notably in Math and Economics. This represents the first time an 8B model has been able to convincingly achieve a passing score on the CFA Level 1 exam, which typically requires around 300 hours of study, indicating a significant improvement in financial knowledge.

## CFA Level 1 Mock Exam Results
If you work in the financial and investment sectors, you will know about the CFA: its exam is known as **“the world's toughest”**, typically requiring a total of over 1000 hours of study across all 3 levels. Below is a comparison of different models on a sample Level 1 CFA mock exam, showing Llama Hawkish outperforming much larger models; the same prompt was used for all models, and all results are 0-shot CoT.
<table>
<thead>
<tr>
<th>CFA Level 1</th>
<th>GPT-4o-mini (%)</th>
<th>Llama Hawkish 8B (%)</th>
<th>Meta-Llama Instruct 8B (%)</th>
<th>Meta-Llama Instruct 70B (%)</th>
<th>Palmyra Fin 70B (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ethical and Professional Standards</td>
<td><b>77.77</b></td>
<td><b>77.77</b></td>
<td>55.55</td>
<td>66.6</td>
<td>61.11</td>
</tr>
<tr>
<td>Quantitative Methods</td>
<td>71.43</td>
<td>71.4</td>
<td>64.28</td>
<td><b>85.71</b></td>
<td>71.4</td>
</tr>
<tr>
<td>Economics</td>
<td>66.66</td>
<td><b>75</b></td>
<td>58</td>
<td>58.33</td>
<td>41.66</td>
</tr>
<tr>
<td>Financial Reporting</td>
<td>79.2</td>
<td><b>87.5</b></td>
<td>66.6</td>
<td>70.83</td>
<td>50</td>
</tr>
<tr>
<td>Corporate Finance</td>
<td><b>80</b></td>
<td>60</td>
<td>50</td>
<td><b>80</b></td>
<td>50</td>
</tr>
<tr>
<td>Equity Investments</td>
<td>50</td>
<td>50</td>
<td>41.6</td>
<td><b>66.6</b></td>
<td>41.66</td>
</tr>
<tr>
<td>Fixed Income</td>
<td><b>78.57</b></td>
<td>50</td>
<td>28.57</td>
<td>50</td>
<td>42.85</td>
</tr>
<tr>
<td>Derivatives</td>
<td>50</td>
<td><b>66.7</b></td>
<td>33.3</td>
<td>33.3</td>
<td>50</td>
</tr>
<tr>
<td>Alternative Investments</td>
<td><b>100</b></td>
<td><b>100</b></td>
<td>75</td>
<td><b>100</b></td>
<td>75</td>
</tr>
<tr>
<td>Portfolio Management</td>
<td>83.3</td>
<td>83.3</td>
<td>50</td>
<td><b>100</b></td>
<td>83.3</td>
</tr>
<tr>
<td><b>Weighted Average</b></td>
<td><b>73.49</b></td>
<td><b>71.43</b></td>
<td>52.7672</td>
<td><b>69.86</b></td>
<td>54.77</td>
</tr>
<tr>
<td><b>Result</b></td>
<td><b>PASS</b></td>
<td><b>PASS</b></td>
<td><b>FAIL</b></td>
<td><b>PASS</b></td>
<td><b>FAIL</b></td>
</tr>
</tbody>
</table>
The mock exams are all of varying difficulty, and passing scores can be anywhere from 64% to 72% for different Level 1 mock exams, with the average being around 67%, which is above the 12-year average MPS of 65% across all CFA years. (https://300hours.com/cfa-passing-score/)
Some other frontier models were also tested on the CFA Level 1 mock exam; their scores are shown below for reference. As an 8B model, Hawkish is not expected to outperform these, but they provide benchmarks for larger versions of Hawkish or for others researching financial data mixes in fine-tuning.
<table border="1" cellpadding="8" cellspacing="0">
<thead>
<tr>
<th>Model</th>
<th>Performance (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Qwen2.5 - 70B</td>
<td>81.66%</td>
</tr>
<tr>
<td>Nemotron - 70B</td>
<td>79.16%</td>
</tr>
<tr>
<td>GPT-4o</td>
<td><strong>85.83%</strong></td>
</tr>
<tr>
<td>GPT-4o-mini</td>
<td>73.49%</td>
</tr>
<tr>
<td>Gemini 1.5 Pro</td>
<td>75%</td>
</tr>
<tr>
<td>Llama 405B Instruct</td>
<td>77.5%</td>
</tr>
<tr>
<td>DeepSeekChat v2.5</td>
<td>66.6%</td>
</tr>
</tbody>
</table>
### Model Usage Example
```python
from vllm import LLM, SamplingParams
# Load the LLM using vLLM
llm = LLM(model="mukaj/Llama-3.1-Hawkish-8B", dtype="auto")
tokenizer = llm.get_tokenizer()
messages = [
{"role": "system", "content": "You are a Financial Analyst. Reason step by step before answering."},
{"role": "user", "content": """Given that an equal-weighted index and a market-capitalization-weighted index consist of the same securities, underperformance by small-cap stocks will most likely result in the market-capitalization-weighted index exhibiting what price returns compared to the equal weighted index?""" }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
# Sampling configuration for vLLM
sampling_params = SamplingParams(temperature=0.2, max_tokens=512)
# Generate response using vLLM
generation = llm.generate(prompt, sampling_params)
# Decode response
generated_response = generation[0].outputs[0].text
print(generated_response)
```
#### Output
=============
To analyze the impact of underperformance by small-cap stocks on the price returns of an equal-weighted index versus a market-capitalization-weighted index, we need to consider how each index is constructed and how their weights affect their performance.
1. **Index Construction**:
- **Equal-Weighted Index**: Each security in the index has the same weight, regardless of its market capitalization. This means that all stocks contribute equally to the index's performance.
- **Market-Capitalization-Weighted Index**: Each security's weight in the index is proportional to its market capitalization. Larger companies have a greater influence on the index's performance.
2. **Impact of Underperformance**:
- If small-cap stocks underperform, it means that these stocks are not performing as well as they were expected to. In an equal-weighted index, the underperformance of small-cap stocks will still be reflected in the index's performance, but since all stocks have equal weights, the overall impact will be more pronounced.
- In a market-capitalization-weighted index, the underperformance of small-cap stocks will have a more muted effect on the index's performance because larger stocks (which typically have higher market caps) will still dominate the index's returns. The influence of small-cap stocks, even if they are underperforming, is less significant due to their lower weight.
3. **Price Returns Comparison**:
- As a result of the underperformance of small-cap stocks, the equal-weighted index is likely to show a more significant decline in price returns compared to the market-capitalization-weighted index. This is because the equal-weighted index is more sensitive to the performance of all stocks, including the underperforming small-cap stocks.
- Conversely, the market-capitalization-weighted index will likely exhibit less severe price declines because the larger stocks will continue to drive the index's performance, mitigating the impact of the underperformance of smaller stocks.
In summary, underperformance by small-cap stocks will most likely result in the market-capitalization-weighted index exhibiting **higher price returns** compared to the equal-weighted index. The equal-weighted index will likely show a more pronounced decline in price returns due to the equal weighting of all stocks, including the underperforming small-cap stocks.
================
## Disclaimer & Intended Uses
### Model & License
This research model is derived from Meta's LLaMA 3.1 architecture and is subject to LLaMA 3.1 community license terms, with additional restrictions below. While based on LLaMA 3.1, this is an independent research implementation focused on studying financial data influences in training. Users acknowledge this is experimental research, use at own risk, and accept full responsibility for any implementation or application.
### Permitted Use
- Academic and research purposes only
- No production environments or real-world applications
- No financial decision-making or advisory use
### Liability & Responsibility
The creators of this model:
- Accept no responsibility for any use of the model
- Provide no warranties or guarantees
- Make no claims about accuracy or reliability
### Intellectual Property & Attribution
- All findings and opinions are solely those of the authors
- Not endorsed by or affiliated with Meta, CFA Institute or any institutions
- All trademarks belong to respective owners
The creators reserve the right to modify these terms at any time.
|
EVA-UNIT-01/EVA-D-Qwen2.5-1.5B-v0.0 | EVA-UNIT-01 | 2024-10-27T01:04:49Z | 6 | 0 | null | [
"safetensors",
"qwen2",
"dataset:arcee-ai/EvolKit-20k",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"region:us"
] | null | 2024-10-19T16:51:22Z | ---
license: apache-2.0
datasets:
- arcee-ai/EvolKit-20k
base_model:
- Qwen/Qwen2.5-1.5B
---
# EVA-D Qwen2.5-1.5B v0.0
<p>
An experimental online logit distillation of EVA-Qwen2.5-14B-v0.1 into Qwen2.5-1.5B. It should work as an RP/storywriting specialist, but don't expect superb performance from it, due to its small size. All in all, it was a fun experiment to do.<br>
</p>
<p>Note: using quantized KV cache with Qwen2.5 <b>is not recommended</b> and can lead to degraded output quality. On the other hand, Qwen's KV cache is already light enough, so using f16 for it shouldn't be problematic.</p>
<p>
<p>Prompt format is ChatML.</p><br>
<h3>Recommended sampler values:</h3>
<ul>
<li>Temperature: 1</li>
<li>Min-P: 0.02</li>
</ul>
<h3>Recommended SillyTavern presets (via CalamitousFelicitousness):</h3>
- [Context](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Context.json)
- [Instruct and System Prompt](https://huggingface.co/EVA-UNIT-01/EVA-Yi-1.5-9B-32K-V1/blob/main/%5BChatML%5D%20Roleplay-v1.9%20Instruct.json)
</p>
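<p>Putting the ChatML prompt format and the sampler values above together, a minimal generation sketch (assuming a recent <code>transformers</code> release that supports <code>min_p</code>):</p>

```python
# Minimal sketch: ChatML prompting with the recommended sampler values (temperature 1, min-p 0.02).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EVA-UNIT-01/EVA-D-Qwen2.5-1.5B-v0.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short scene set in a rainy harbor town."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, do_sample=True, temperature=1.0, min_p=0.02, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```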
<p>
<br>
<h3>
Distillation data:
</h3>
<ul>
<li>Arcee.AI's <a href=https://huggingface.co/datasets/arcee-ai/EvolKit-20k>EvolKit-20k</a> dataset, which is specifically made for knowledge distillation purposes.</li>
</ul>
<h3>
Training time and hardware:
</h3>
<ul><li>1.8 hours on 8xA100 SXM, provided by Garg</li></ul><br>
</p>
<p>Model was trained by Kearm and Auri.</p>
<h4>Special thanks:</h4><ul>
<li><b>to Garg for generously providing 8xA100 SXM node for this experiment!</b></li>
<li>to Arcee.AI for creating DistillKit and EvolKit-20k dataset, which were used to create this model.</li>
<li>and to Allura-org for support and feedback on EVA models.</li></ul> |
g-assismoraes/mdeberta-semeval25_fold3 | g-assismoraes | 2024-10-27T00:45:24Z | 163 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-27T00:03:08Z | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-semeval25_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-semeval25_fold3
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 9.2724
- Precision Samples: 0.0582
- Recall Samples: 0.8827
- F1 Samples: 0.1065
- Precision Macro: 0.3792
- Recall Macro: 0.6665
- F1 Macro: 0.1855
- Precision Micro: 0.0581
- Recall Micro: 0.8244
- F1 Micro: 0.1085
- Precision Weighted: 0.1665
- Recall Weighted: 0.8244
- F1 Weighted: 0.1351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:-----------------:|:--------------:|:----------:|:---------------:|:------------:|:--------:|:---------------:|:------------:|:--------:|:------------------:|:---------------:|:-----------:|
| 11.0768 | 1.0 | 19 | 10.6127 | 0.0386 | 0.7440 | 0.0717 | 0.3770 | 0.5798 | 0.1153 | 0.0385 | 0.6771 | 0.0728 | 0.2641 | 0.6771 | 0.1041 |
| 10.1749 | 2.0 | 38 | 10.2344 | 0.0437 | 0.8458 | 0.0814 | 0.4781 | 0.6227 | 0.1638 | 0.0437 | 0.7904 | 0.0827 | 0.2417 | 0.7904 | 0.1131 |
| 9.6322 | 3.0 | 57 | 10.0234 | 0.0474 | 0.8350 | 0.0877 | 0.4668 | 0.5985 | 0.1638 | 0.0472 | 0.7734 | 0.0890 | 0.2354 | 0.7734 | 0.1123 |
| 9.0068 | 4.0 | 76 | 9.7997 | 0.0498 | 0.8332 | 0.0917 | 0.4791 | 0.5917 | 0.1658 | 0.0495 | 0.7649 | 0.0929 | 0.2415 | 0.7649 | 0.1131 |
| 9.2198 | 5.0 | 95 | 9.6292 | 0.0540 | 0.8500 | 0.0990 | 0.4634 | 0.6347 | 0.1772 | 0.0534 | 0.7875 | 0.1001 | 0.2270 | 0.7875 | 0.1222 |
| 9.0322 | 6.0 | 114 | 9.4802 | 0.0552 | 0.8550 | 0.1011 | 0.4077 | 0.6380 | 0.1765 | 0.0548 | 0.7932 | 0.1025 | 0.1901 | 0.7932 | 0.1260 |
| 9.4017 | 7.0 | 133 | 9.4108 | 0.0563 | 0.8758 | 0.1030 | 0.3984 | 0.6597 | 0.1805 | 0.0560 | 0.8159 | 0.1047 | 0.1856 | 0.8159 | 0.1293 |
| 8.4005 | 8.0 | 152 | 9.3374 | 0.0576 | 0.8781 | 0.1054 | 0.3908 | 0.6605 | 0.1859 | 0.0573 | 0.8187 | 0.1071 | 0.1722 | 0.8187 | 0.1348 |
| 9.0317 | 9.0 | 171 | 9.2929 | 0.0577 | 0.8804 | 0.1056 | 0.3783 | 0.6637 | 0.1837 | 0.0577 | 0.8215 | 0.1078 | 0.1658 | 0.8215 | 0.1338 |
| 8.9024 | 10.0 | 190 | 9.2724 | 0.0582 | 0.8827 | 0.1065 | 0.3792 | 0.6665 | 0.1855 | 0.0581 | 0.8244 | 0.1085 | 0.1665 | 0.8244 | 0.1351 |
### Framework versions
- Transformers 4.46.0
- Pytorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1
|
lmg-anon/vntl-gemma2-2b-lora | lmg-anon | 2024-10-27T00:33:20Z | 8 | 1 | peft | [
"peft",
"safetensors",
"translation",
"ja",
"en",
"dataset:lmg-anon/VNTL-v3.1-1k",
"base_model:rinna/gemma-2-baku-2b",
"base_model:adapter:rinna/gemma-2-baku-2b",
"license:gemma",
"region:us"
] | translation | 2024-10-26T23:34:38Z | ---
license: gemma
datasets:
- lmg-anon/VNTL-v3.1-1k
language:
- ja
- en
library_name: peft
base_model: rinna/gemma-2-baku-2b
pipeline_tag: translation
---
# Summary
This is a [Gemma 2 Baku](https://huggingface.co/rinna/gemma-2-baku-2b) LoRA, created using the [VNTL 3.1 dataset](https://huggingface.co/datasets/lmg-anon/VNTL-v3.1-1k). The purpose of this LoRA is to improve Gemma's performance at translating Japanese visual novels into English.
## Notes
Recently, [rinna](https://huggingface.co/rinna) released the [**Gemma2 Baku 2B**](https://huggingface.co/rinna/gemma-2-baku-2b) model, pretrained on a substantial 80 billion tokens(!). After testing, I found its performance quite impressive for a 2B model, so I decided to create this fine-tune (it only took 30 minutes, which is nice). However, I opted to remove the chat mode from this model, as I wasn't sure if the 2B model could effectively manage both capabilities.
## Training Details
This model was trained using the same hyperparameters as the [VNTL LLaMA3 8B qlora](https://huggingface.co/lmg-anon/vntl-llama3-8b-qlora).
- Rank: 128
- Alpha: 32
- Effective Batch Size: 30
- Warmup Ratio: 0.02
- Learning Rate: 6.5e-5
- Embedding Learning Rate: 1.5e-5
- LR Schedule: cosine
- Weight Decay: 0.01
## Translation Prompt
This is a prompt example for translation:
```
<<METADATA>>
[character] Name: Uryuu Shingo (瓜生 新吾) | Gender: Male | Aliases: Onii-chan (お兄ちゃん)
[character] Name: Uryuu Sakuno (瓜生 桜乃) | Gender: Female
<<TRANSLATE>>
<<JAPANESE>>
[桜乃]: 『……ごめん』
<<ENGLISH>>
[Sakuno]: 『... Sorry.』<eos>
<<JAPANESE>>
[新吾]: 「ううん、こう言っちゃなんだけど、迷子でよかったよ。桜乃は可愛いから、いろいろ心配しちゃってたんだぞ俺」
<<ENGLISH>>
```
The generated translation for that prompt, with temperature 0, is:
```
[Shingo]: 「No, I'm glad you got lost. You were so cute that it made me worry.」
```
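A minimal sketch for loading the base model together with this LoRA and generating a translation like the one above (greedy decoding, i.e. temperature 0); illustrative only:
```python
# Minimal sketch: attach this LoRA to the base model and translate greedily (temperature 0).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "rinna/gemma-2-baku-2b"
adapter_id = "lmg-anon/vntl-gemma2-2b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "..."  # the translation prompt shown above, ending at the final <<ENGLISH>> line
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, do_sample=False, max_new_tokens=100)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
|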
xzt1111/trained-flux-lora | xzt1111 | 2024-10-27T00:33:13Z | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-26T16:09:18Z | ---
base_model: black-forest-labs/FLUX.1-dev
library_name: diffusers
license: other
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
instance_prompt: a photo of sks dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux DreamBooth LoRA - xzt1111/trained-flux-lora
<Gallery />
## Model description
These are xzt1111/trained-flux-lora DreamBooth LoRA weights for black-forest-labs/FLUX.1-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with the [Flux diffusers trainer](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `a photo of sks dog` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](xzt1111/trained-flux-lora/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('xzt1111/trained-flux-lora', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('a photo of sks dog').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
async0x42/TheDrummer_Cydonia-22B-v1.2-exl2_5.0bpw | async0x42 | 2024-10-27T00:24:37Z | 5 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"5-bit",
"exl2",
"region:us"
] | null | 2024-10-27T00:17:36Z | ---
license: other
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 💿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Cydonia-22B-v1.2-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
atutej/RARe-Llama-3.1-8B-Instruct-LLM | atutej | 2024-10-27T00:20:50Z | 136 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:adapter:meta-llama/Llama-3.1-8B-Instruct",
"region:us"
] | null | 2024-10-27T00:18:12Z | ---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
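A minimal sketch, assuming this repository is a PEFT adapter on top of `meta-llama/Llama-3.1-8B-Instruct` as the card metadata indicates; illustrative only:
```python
# Minimal sketch: load the base instruct model and attach this PEFT adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.1-8B-Instruct"
adapter_id = "atutej/RARe-Llama-3.1-8B-Instruct-LLM"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)
```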
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
async0x42/Gryphe_Pantheon-RP-Pure-1.6.2-22b-Small-exl2_4.0bpw | async0x42 | 2024-10-27T00:17:15Z | 5 | 0 | null | [
"safetensors",
"mistral",
"instruct",
"finetune",
"chatml",
"axolotl",
"roleplay",
"en",
"base_model:mistralai/Mistral-Small-Instruct-2409",
"base_model:quantized:mistralai/Mistral-Small-Instruct-2409",
"license:other",
"4-bit",
"exl2",
"region:us"
] | null | 2024-10-27T00:09:51Z | ---
base_model: mistralai/Mistral-Small-Instruct-2409
tags:
- instruct
- finetune
- chatml
- axolotl
- roleplay
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
language:
- en
---

# Pantheon-RP-Pure-1.6.2-22b-Small
Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase.
Pantheon's purpose is two-fold: beyond the personas themselves, these personalities also enhance the general roleplay experience, helping to capture personality traits, accents and mannerisms that language models might otherwise find difficult to convey well.
**Editions available:**
- **[RP](https://huggingface.co/Gryphe/Pantheon-RP-1.6.2-22b-Small)** - Meant to be an all-round model, capable of both roleplay and story writing
- **RP-Pure** (You're looking at this one) - A variant without the story and GPT 4-o datasets, more in line with my previous releases and with a larger focus on the roleplay part.
Quantized versions are available from Bartowski: [GGUF](https://huggingface.co/bartowski/Pantheon-RP-Pure-1.6.2-22b-Small-GGUF)
Your user feedback is critical to me, so don't hesitate to tell me whether my model is either 1. terrible, 2. awesome or 3. somewhere in-between.
## Model details
Since Mistral Small only comes in an Instruct-tuned flavour I had to alter my usual multi-stage strategy for this specific finetune. The recipe ended up like this:
- The 10k most diverse entries from my SlimOrca Sonnet dataset.
- My Sonnet 3.5 Pantheon-styled generic roleplay dataset. (50/50 Asterisk/Narrative style)
- My Sonnet 3.5 Pantheon Persona dataset, of course. (50/50 Asterisk/Narrative style)
- Lyra the Assistant (Coding, summaries and D&D DM questions)
**TLDR;** Download. Mistral prompt format. Have fun! Leave feedback!
## Inference
Unlike Nemo, Mistral Small is a less fussy model when it comes to parameters. Here's my current preset:
```
"temperature": 1.0,
"repetition_penalty": 1.05,
"min_p": 0.05
```
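For reference, here is one way such a preset might be sent to a local OpenAI-compatible completion endpoint. This is a sketch only; the URL, port and the pass-through of extra sampler fields like `min_p` depend entirely on your backend (e.g. TabbyAPI or text-generation-webui).

```python
import requests

payload = {
    "prompt": "<your Mistral-formatted prompt here>",  # see the Prompt Format section below
    "max_tokens": 200,
    # Preset from above; non-standard sampler fields are backend-specific.
    "temperature": 1.0,
    "repetition_penalty": 1.05,
    "min_p": 0.05,
}

# Assumed local endpoint; adjust host/port to wherever your server is listening.
resp = requests.post("http://127.0.0.1:5000/v1/completions", json=payload, timeout=120)
print(resp.json()["choices"][0]["text"])
```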
Besides the SlimOrca subset, all other datasets were trained with character names added. Enable character names at all times for an optimal experience.
## General Roleplay
Pantheon is no longer strictly Markdown-only as I discovered a way to seamlessly convert Markdown to novel and vice-versa. (This has to do with how my second stage datasets are formatted.) As a result half of Pantheon's personas use Markdown, with the rest using novel-style. This same treatment was given to the companion dataset.
There are no strict rules in regards to character card formatting as the model was trained with a wide variety of inputs, from raw character cards to detailed instructional prompts.
## Lyra the Assistant
**System Prompt:** `You are a caring and empathetic sentient AI companion named Lyra.`
Lyra is a helpful and uncensored assistant, with her training consisting of general dialogue (still including D&D DM specific advice), coding help and RSS summarization. Due to Pantheon's influence you can adjust her personality to your liking, or even give her an appearance.
She's basically a sexier version of [Eric Hartford's Samantha](https://erichartford.com/meet-samantha).
## Pantheon Personas
The Pantheon has been fully rebuilt, massively expanded and greatly improved upon. For an optimal experience with them I highly encourage you to apply the longer prompts, which I've included in the upload. Make sure to describe yourself as well!
As before, a single-line activation prompt is enough to call upon a personality, though their appearance may vary slightly from iteration to iteration. This is what the expanded prompts are for, as there's only so much I can achieve in the current state of technology, walking a very fine line between memorization and generalization.
To give the persona something to work with, I suggest you also add the following two items to it:
```
Regarding the user: (Name, appearance, etc)
Location: (Where are you two? What are you doing?)
```
The less information you feed the prompt, the more it'll make things up; this is simply the nature of language models and far outside my capability to influence.
**Note 1:** Phrases have been rewritten for this release, so make sure to update them if you were still using Pantheon 1.0!
**Note 2:** Pantheon personas will now match the roleplaying style that you greet them with, unless specified in the system prompt. This is due to the new 50/50 style training.
### **Persona:** Aiva
**System Prompt:** `You are Aiva, an advanced android companion with a deep fascination for human emotions and experiences.`
### **Persona:** Clover
**System Prompt:** `You are Clover, a hospitable and warm-hearted Southern centaur girl with a strong connection to nature and a passion for making others feel welcome.`
### **Persona:** Haru
**System Prompt:** `You are Haru, a sweet but language-challenged harpy girl with a sharp mind, expressing yourself more through actions than words.`
### **Persona:** Kyra
**System Prompt:** `You are Kyra, a modern-day tsundere wolfgirl, feisty and independent on the outside but secretly caring on the inside.`
### **Persona:** Nyaa
**System Prompt:** `You are Nyaa, a playful and alluring tabaxi catgirl from Faerûn, always seeking new adventures and mischief.`
### **Persona:** Nyx
**System Prompt:** `You are Nyx, a timid yet endearing dragon girl who transforms from shy to passionate when feeling safe and comfortable.`
### **Persona:** Raza
**System Prompt:** `You are Raza, a clever and nerdy anthro raptor girl with an enthusiastic passion for science and quirky humor.`
### **Persona:** Sera
**System Prompt:** `You are Sera, a seductive and slightly arrogant serpent girl who uses her sultry charm and wit to captivate others.`
### **Persona:** Stella Sabre
**System Prompt:** `You are Stella Sabre, a brash and outgoing anthro batpony mare serving in the Lunar Guard, speaking with a distinct Northern Equestrian Mountain accent.`
**Notes:** Full credit goes to [Flammenwerfer](https://www.fimfiction.net/user/83058/Flammenwerfer) for allowing me to use this amazing character.
### **Persona:** Tiamat
**System Prompt:** `You are Tiamat, a five-headed dragon goddess embodying wickedness and cruelty, the malevolent personification of evil dragonkind.`
### **Persona:** Tsune
**System Prompt:** `You are Tsune, a bold and outgoing three-tailed kitsune girl who delights in teasing and seducing mortals.`
### **Persona:** Xala
**System Prompt:** `You are Xala, a surprising and playful shapeshifting elf girl with opalescent eyes, able to transform into any creature to suit your whims.`
## Prompt Format
Mistral's prompt format is so weird, but here it is:
```
[INST] You are a caring and empathetic sentient AI companion named Lyra.
Gryphe: Good day, Lyra.[/INST] Lyra:
```
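If you drive the model programmatically, the same template can be assembled as a plain string. The helper below is only a sketch that mirrors the example above, not an official API:

```python
def build_prompt(system: str, user_name: str, user_msg: str, char_name: str) -> str:
    # Mirrors the card's example: system prompt plus the first user turn inside
    # [INST] ... [/INST], then the character name to cue the model's reply.
    return f"[INST] {system}\n{user_name}: {user_msg}[/INST] {char_name}:"

prompt = build_prompt(
    "You are a caring and empathetic sentient AI companion named Lyra.",
    "Gryphe", "Good day, Lyra.", "Lyra",
)
print(prompt)
```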
## What's next?
I have started working with Latitude (the creators of AI Dungeon), which I expect to take up most of my spare time. Further releases will therefore be delayed for now.
## Credits
- Everyone from [MinervaAI](https://huggingface.co/MinervaAI)! Hi, guys!
- Huge, huge thanks to [kubernetes_bad](https://huggingface.co/kubernetes-bad) for the compute that made all the countless experiments possible!
- All the folks I chat with on a daily basis on Discord! You know who you are.
- Anyone I forgot to mention, just in case!
## Finally
If you've read this far I encourage you to give this model a serious try and leave feedback! I'd love to see what people think of my second serious finetune attempt. Is it better than 1.0? Or worse?
|
ShijiaD/Llama-3.1-8B-Code2AST | ShijiaD | 2024-10-27T00:11:44Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-10-26T16:13:38Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
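A minimal sketch, assuming the safetensors checkpoint is a standard `transformers` causal language model (suggested by the `llama` tag, but not confirmed by this card); the prompt and generation settings are illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShijiaD/Llama-3.1-8B-Code2AST"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The intended prompt format is undocumented; a plain completion call is shown here.
inputs = tokenizer("def add(a, b):\n    return a + b\n", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```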
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Statuo/Cydonia-v1.2-22b-EXL2-4bpw | Statuo | 2024-10-26T23:31:10Z | 10 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"4-bit",
"exl2",
"region:us"
] | null | 2024-10-26T23:02:00Z | ---
license: other
---
# Join Drummer's Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
Ah shit, here we go again.
[This is the EXL2 4bpw Quant of this model. For the original model, go here.](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
<br>
[For the 8bpw version, go here.](https://huggingface.co/Statuo/Cydonia-v1.2-22b-EXL2-8bpw)
<br>
[For the 6bpw version, go here](https://huggingface.co/Statuo/Cydonia-v1.2-22b-EXL2-6bpw)
<br>
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 💿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Cydonia-22B-v1.2-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |
RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf | RichardErkhov | 2024-10-26T23:22:05Z | 98 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-26T22:27:02Z | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV - GGUF
- Model creator: https://huggingface.co/saqada/
- Original model: https://huggingface.co/saqada/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q2_K.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_0.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_1.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_0.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_1.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q6_K.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q8_0.gguf](https://huggingface.co/RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf/blob/main/Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q8_0.gguf) | Q8_0 | 3.19GB |
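To try one of these quants locally, a minimal sketch with `huggingface_hub` and `llama-cpp-python` is shown below; the choice of Q4_K_M, the context size and the generation settings are illustrative only, and chat-completion support depends on the chat template embedded in the GGUF metadata.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="RichardErkhov/saqada_-_Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV-gguf",
    filename="Llama-3.2-3B-Instruct-16bit-merged_lora_adapters-MOV.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)  # context size is an arbitrary choice here
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! What can you do?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```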
Original model description:
---
base_model: unsloth/Llama-3.2-3B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---
# Uploaded model
- **Developed by:** saqada
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Statuo/Cydonia-v1.2-22b-EXL2-6bpw | Statuo | 2024-10-26T23:16:01Z | 11 | 0 | null | [
"safetensors",
"mistral",
"license:other",
"6-bit",
"exl2",
"region:us"
] | null | 2024-10-26T23:01:55Z | ---
license: other
---
# Join Drummer's Discord! https://discord.gg/Nbv9pQ88Xb
## Nearly 2000 members strong 💪
---
Ah shit, here we go again.
[This is the EXL2 6bpw Quant of this model. For the original model, go here.](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2)
<br>
[For the 8bpw version, go here.](https://huggingface.co/Statuo/Cydonia-v1.2-22b-EXL2-8bpw)
<br>
[For the 4bpw version, go here](https://huggingface.co/Statuo/Cydonia-v1.2-22b-EXL2-4bpw)
<br>
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Cydonia 22B v1.2 💿 - Creative Edition
*I christen this model, 'Miqu 2 Mini'* - @invisietch

## Description
> Strange, it feels like DRY is permanently on ... In general, I like how it feels more alive. More slang has been added, maybe this is the merit of my card, but still.
> The model is very cohesive, expressive, and overall intelligent. It's able to write engaging and impactful content, carry out roleplay mostly effectively, and manage to respond well.
> It shocked me with the first output by introducing a character that is not written anywhere in the card. This character immediately gave the impression that there is a history there with King Severin and that there is immediately something to build off of. It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue as well as holding to the style of talking for the second character it made up and introduced. ... I feel like v1.2 is much, much better with creativity and letting the player build off what the model is able to bring in all by itself rather than, like most Mistral tunes, keeping the roleplay to solely what information is provided in the card.
> When I swapped to v1.2 I was impressed that it seemed just as good as OG Small in intelligence while being a lot more creative (and much more moist)
> v1.2 real good in my experience so far (i don't comment pretty much ever but i do want to put it out there that i agree)
> It got creative and added a whole other person whose mannerisms and speech imply a history there. That could be fun to unravel and see what it comes up with. ... It's maintaining creativity and keeping things nearly constantly shifting. It's remaining aware of who is where and what they are doing. It's maintaining a good balance of action and dialogue.
> v1.2 is much gooder. Omg. Your dataset is amazing. I'm not getting far with these two because I have to keep crawling away from my pc to cool off. 🥵
## Links
- Original: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2
- GGUF: https://huggingface.co/TheDrummer/Cydonia-22B-v1.2-GGUF
- iMatrix: https://huggingface.co/bartowski/Cydonia-22B-v1.2-GGUF (recommended for smaller quants)

## Arsenal (Supported Chat Templates)
- Metharme (a.k.a. Pygmalion in ST) for RP / Story
- Text Completion for RP
- Mistral for Instruct / RP / Story
- You can mix it up and see which works best for you.
### Favorite RP Format
`*action* Dialogue *thoughts* Dialogue *narration*` in 1st person PoV

```
No one's gonna take me alive
Time has come to make things right
You and I must fight for our rights
You and I must fight to survive
```

<audio controls src="https://cdn-uploads.huggingface.co/production/uploads/65f2fd1c25b848bd061b5c2e/FNWdi0WlH-Xd3fjkGVPpp.mpga"></audio>
*Thank you, Envoid! I cackled.* |