| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string (length 5 to 139) | string (length 2 to 42) | timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-08-02 18:27:42) | int64 (0 to 223M) | int64 (0 to 11.7k) | string (549 classes) | list (length 1 to 4.05k) | string (55 classes) | timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-08-02 18:24:50) | string (length 11 to 1.01M) |
BricksDisplay/vits-eng-welsh-female
|
BricksDisplay
| 2024-10-17T11:58:43Z | 7 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"vits",
"text-to-speech",
"base_model:ylacombe/vits_ljs_welsh_female_monospeaker",
"base_model:quantized:ylacombe/vits_ljs_welsh_female_monospeaker",
"region:us"
] |
text-to-speech
| 2024-10-17T11:57:26Z |
---
base_model:
- ylacombe/vits_ljs_welsh_female_monospeaker
pipeline_tag: text-to-speech
library_name: transformers.js
---
Converted from `ylacombe/vits_ljs_welsh_female_monospeaker`.
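As a hedged illustration (not part of the original card), the source checkpoint named above can be loaded with the Python `transformers` VITS classes; this repository itself holds the ONNX export intended for Transformers.js:
```python
# Minimal sketch, assuming the source checkpoint works with the standard
# transformers VITS classes; adjust the input text as needed.
import torch
from transformers import VitsModel, VitsTokenizer

model = VitsModel.from_pretrained("ylacombe/vits_ljs_welsh_female_monospeaker")
tokenizer = VitsTokenizer.from_pretrained("ylacombe/vits_ljs_welsh_female_monospeaker")

inputs = tokenizer("Bore da", return_tensors="pt")  # example Welsh input text
with torch.no_grad():
    waveform = model(**inputs).waveform[0]  # play back at model.config.sampling_rate
```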
|
SidXXD/19
|
SidXXD
| 2024-10-17T11:53:02Z | 6 | 0 |
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] |
text-to-image
| 2024-10-16T20:27:29Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a <v1*> person
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/19
These are Custom Diffusion adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the prompt "photo of a <v1*> person" using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
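As a hedged sketch (not part of the original card), Custom Diffusion weights produced by the diffusers training example are typically loaded as below; the exact weight file names in this repository are assumptions and should be checked against its contents:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumed file names: the diffusers custom_diffusion example saves the attention
# processors as pytorch_custom_diffusion_weights.bin and the new token embedding
# as <token>.bin.
pipe.unet.load_attn_procs("SidXXD/19", weight_name="pytorch_custom_diffusion_weights.bin")
pipe.load_textual_inversion("SidXXD/19", weight_name="<v1*>.bin")

image = pipe("photo of a <v1*> person", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("person.png")
```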
|
lemon07r/Gemma-2-Ataraxy-v4c-9B
|
lemon07r
| 2024-10-17T11:51:53Z | 20 | 4 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v3b-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-16T15:02:24Z |
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- lemon07r/Gemma-2-Ataraxy-v3b-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
model-index:
- name: Gemma-2-Ataraxy-v4c-9B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 69.45
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 44.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 17.98
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 11.19
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 15.3
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 37.72
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v4c-9B
name: Open LLM Leaderboard
---
# Gemma-2-Ataraxy-v4c-9B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v3b-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3b-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
dtype: bfloat16
merge_method: slerp
parameters:
t: 0.25
slices:
- sources:
- layer_range: [0, 42]
model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- layer_range: [0, 42]
model: lemon07r/Gemma-2-Ataraxy-v3b-9B
```
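As an illustrative sketch (not part of the original card), the merged model should load like any other Gemma 2 checkpoint with `transformers`; the chat-template call below assumes the standard Gemma 2 template shipped with the tokenizer:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lemon07r/Gemma-2-Ataraxy-v4c-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a haiku about merging models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```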
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v4c-9B)
| Metric |Value|
|-------------------|----:|
|Avg. |32.63|
|IFEval (0-Shot) |69.45|
|BBH (3-Shot) |44.13|
|MATH Lvl 5 (4-Shot)|17.98|
|GPQA (0-shot) |11.19|
|MuSR (0-shot) |15.30|
|MMLU-PRO (5-shot) |37.72|
|
Ftmhd/my_awesome_model
|
Ftmhd
| 2024-10-17T11:51:11Z | 104 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"base_model:microsoft/deberta-v3-small",
"base_model:finetune:microsoft/deberta-v3-small",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-15T19:40:03Z |
---
library_name: transformers
license: mit
base_model: microsoft/deberta-v3-small
tags:
- generated_from_trainer
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [microsoft/deberta-v3-small](https://huggingface.co/microsoft/deberta-v3-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
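As a hedged illustration only (the author has not documented usage), the checkpoint can presumably be queried with the standard text-classification pipeline; the label set depends on the checkpoint's config:
```python
from transformers import pipeline

# Hedged sketch: the intended task and label names are not documented in this card.
classifier = pipeline("text-classification", model="Ftmhd/my_awesome_model")
print(classifier("This was absolutely wonderful."))
```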
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Vahan123/xlm-roberta-base-finetuned-ner
|
Vahan123
| 2024-10-17T11:49:03Z | 26 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-09-20T11:40:41Z |
---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-base-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0666
- Precision: 0.9293
- Recall: 0.9362
- F1: 0.9327
- Accuracy: 0.9819
## Model description
More information needed
## Intended uses & limitations
More information needed
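As a hedged illustration only (usage is not documented in the card), the checkpoint can presumably be run with the token-classification pipeline; the entity label set depends on the checkpoint's config:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Vahan123/xlm-roberta-base-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Barack Obama was born in Hawaii."))
```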
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0724 | 1.0 | 54249 | 0.0649 | 0.9129 | 0.9185 | 0.9157 | 0.9784 |
| 0.0593 | 2.0 | 108498 | 0.0608 | 0.9292 | 0.9250 | 0.9271 | 0.9802 |
| 0.0483 | 3.0 | 162747 | 0.0595 | 0.9216 | 0.9324 | 0.9270 | 0.9812 |
| 0.041 | 4.0 | 216996 | 0.0627 | 0.9183 | 0.9361 | 0.9271 | 0.9817 |
| 0.0345 | 5.0 | 271245 | 0.0666 | 0.9293 | 0.9362 | 0.9327 | 0.9819 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.0
- Tokenizers 0.19.1
|
ndrushchak/ukr_gender_classifier
|
ndrushchak
| 2024-10-17T11:44:55Z | 107 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-13T15:57:08Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
alpcansoydas/product-model-17.10.24-bert-total27label_ifhavemorethan100sampleperfamily
|
alpcansoydas
| 2024-10-17T11:43:51Z | 163 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T11:43:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/NovaSpark-GGUF
|
QuantFactory
| 2024-10-17T11:17:54Z | 47 | 1 |
transformers
|
[
"transformers",
"gguf",
"generated_from_trainer",
"dataset:Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/stheno-filtered-v1.1",
"dataset:PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT",
"dataset:Gryphe/Sonnet3.5-Charcard-Roleplay",
"dataset:Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned",
"dataset:anthracite-org/kalo-opus-instruct-22k-no-refusal",
"dataset:anthracite-org/nopm_claude_writing_fixed",
"dataset:anthracite-org/kalo_opus_misc_240827",
"base_model:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"base_model:quantized:grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T10:36:35Z |
---
library_name: transformers
license: apache-2.0
base_model:
- grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B
tags:
- generated_from_trainer
datasets:
- Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/stheno-filtered-v1.1
- PJMixers/hieunguyenminh_roleplay-deduped-ShareGPT
- Gryphe/Sonnet3.5-Charcard-Roleplay
- Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned
- anthracite-org/kalo-opus-instruct-22k-no-refusal
- anthracite-org/nopm_claude_writing_fixed
- anthracite-org/kalo_opus_misc_240827
model-index:
- name: Epiculous/NovaSpark
results: []
---
[](https://hf.co/QuantFactory)
# QuantFactory/NovaSpark-GGUF
This is a quantized version of [Epiculous/NovaSpark](https://huggingface.co/Epiculous/NovaSpark), created using llama.cpp.
# Original Model Card

Switching things up a bit since the last slew of models were all 12B, we now have NovaSpark! NovaSpark is an 8B model trained on GrimJim's [abliterated](https://huggingface.co/grimjim/Llama-3.1-SuperNova-Lite-lorabilterated-8B) version of arcee's [SuperNova-lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite).
The hope is that abliteration will remove some of the inherent refusals and censorship of the original model; however, I noticed that finetuning on GrimJim's model undid some of the abliteration, so abliteration will more than likely have to be reapplied to the resulting model to reinforce it.
# Quants!
<strong>full</strong> / [exl2](https://huggingface.co/Epiculous/NovaSpark-exl2) / [gguf](https://huggingface.co/Epiculous/NovaSpark-GGUF)
## Prompting
This model is trained on the Llama instruct template; the prompting structure goes a little something like this:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
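As a hedged sketch (not part of the original card), the template above can be filled in and run against one of the GGUF files with llama-cpp-python; the quantization file name below is an assumption, so substitute whichever file you actually downloaded:
```python
from llama_cpp import Llama

# Hedged sketch: pick the GGUF file that matches your chosen quantization level.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/NovaSpark-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
    "You are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
    "Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n"
)
out = llm(prompt, max_tokens=256, stop=["<|eot_id|>"])
print(out["choices"][0]["text"])
```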
### Context and Instruct
This model is trained on llama-instruct; please use that Context and Instruct template.
### Current Top Sampler Settings
[Smooth Creativity](https://files.catbox.moe/0ihfir.json): Credit to Juelsman for researching this one!<br/>
[Variant Chimera](https://files.catbox.moe/h7vd45.json): Credit to Numbra!<br/>
[Spicy_Temp](https://files.catbox.moe/9npj0z.json) <br/>
[Violet_Twilight-Nitral-Special](https://files.catbox.moe/ot54u3.json) <br/>
|
BricksDisplay/vits-cmn
|
BricksDisplay
| 2024-10-17T11:09:12Z | 9 | 4 |
transformers.js
|
[
"transformers.js",
"onnx",
"safetensors",
"vits",
"text-to-audio",
"text-to-speech",
"zh",
"license:apache-2.0",
"region:us"
] |
text-to-speech
| 2024-01-10T07:54:50Z |
---
license: apache-2.0
language:
- zh
library_name: transformers.js
pipeline_tag: text-to-speech
---
# VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
VITS is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
## Model Details
- Languages: Chinese
- Dataset: THCHS-30
- Speakers: 44
- Training Hours: 48
## Usage
Using this checkpoint from Hugging Face Transformers:
```py
from transformers import VitsModel, VitsTokenizer
from pypinyin import lazy_pinyin, Style
import torch
model = VitsModel.from_pretrained("BricksDisplay/vits-cmn")
tokenizer = VitsTokenizer.from_pretrained("BricksDisplay/vits-cmn")
text = "中文"
payload = ''.join(lazy_pinyin(text, style=Style.TONE, tone_sandhi=True))
inputs = tokenizer(payload, return_tensors="pt")
with torch.no_grad():
output = model(**inputs, speaker_id=0)
from IPython.display import Audio
Audio(output.waveform[0], rate=16000)  # VitsModel returns the synthesized audio as `waveform`
```
Using this checkpoint from Transformers.js:
```js
import { pipeline } from '@xenova/transformers';
import { pinyin } from 'pinyin-pro'; // Our use-case, using `pinyin-pro`
const synthesizer = await pipeline('text-to-audio', 'BricksDisplay/vits-cmn', { quantized: false })
console.log(await synthesizer(pinyin("中文")))
// {
// audio: Float32Array(?) [ ... ],
// sampling_rate: 16000
// }
```
Note: the Transformers.js (ONNX) version does not support `speaker_id`, so it is fixed to 0.
|
tahsinashrafee/nav_desc_Qwen2.5-3B-Instruct
|
tahsinashrafee
| 2024-10-17T11:07:34Z | 6 | 0 | null |
[
"safetensors",
"gguf",
"qwen2",
"navigation",
"description",
"unsloth",
"text-generation",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-3B-Instruct",
"license:unknown",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T10:02:27Z |
---
base_model:
- Qwen/Qwen2.5-3B-Instruct
language:
- en
license: unknown
pipeline_tag: text-generation
tags:
- navigation
- description
- unsloth
---
|
TheImam/Labaynak
|
TheImam
| 2024-10-17T11:01:57Z | 35 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T10:56:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Egdal/distilbert-base-uncased-distilled-clinc
|
Egdal
| 2024-10-17T10:53:20Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T09:30:30Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2176
- Accuracy: 0.9526
## Model description
More information needed
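The distillation setup is not documented here. As a hedged illustration only (not necessarily the author's recipe), a student classifier like this is typically trained with a loss that mixes cross-entropy on the labels with a temperature-scaled KL term against a teacher's logits:
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: ordinary cross-entropy on the ground-truth intent labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * ce + (1.0 - alpha) * kd
```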
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00010640681552913214
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 12
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4706 | 1.0 | 318 | 0.4007 | 0.9226 |
| 0.2466 | 2.0 | 636 | 0.2741 | 0.9432 |
| 0.1424 | 3.0 | 954 | 0.2488 | 0.9423 |
| 0.1141 | 4.0 | 1272 | 0.2363 | 0.9487 |
| 0.1029 | 5.0 | 1590 | 0.2263 | 0.9497 |
| 0.0964 | 6.0 | 1908 | 0.2228 | 0.9510 |
| 0.0926 | 7.0 | 2226 | 0.2160 | 0.9529 |
| 0.0905 | 8.0 | 2544 | 0.2186 | 0.9503 |
| 0.0881 | 9.0 | 2862 | 0.2174 | 0.9542 |
| 0.0871 | 10.0 | 3180 | 0.2193 | 0.9532 |
| 0.0859 | 11.0 | 3498 | 0.2173 | 0.9523 |
| 0.0855 | 12.0 | 3816 | 0.2176 | 0.9526 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
nithish27022003/sentiment_analysis_v1
|
nithish27022003
| 2024-10-17T10:52:16Z | 109 | 1 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"en",
"dataset:stanfordnlp/imdb",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T10:48:42Z |
---
license: apache-2.0
datasets:
- stanfordnlp/imdb
language:
- en
metrics:
- accuracy
base_model:
- google-bert/bert-base-uncased
new_version: google-bert/bert-base-uncased
library_name: transformers
---
|
Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF
|
Triangle104
| 2024-10-17T10:48:15Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"dataset:jondurbin/truthy-dpo-v0.1",
"dataset:kyujinpy/orca_math_dpo",
"dataset:antiven0m/physical-reasoning-dpo",
"base_model:nbeerbower/Llama3.1-Allades-8B",
"base_model:quantized:nbeerbower/Llama3.1-Allades-8B",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T10:47:04Z |
---
base_model: nbeerbower/Llama3.1-Allades-8B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
- antiven0m/physical-reasoning-dpo
library_name: transformers
license: llama3.1
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Llama3.1-Allades-8B`](https://huggingface.co/nbeerbower/Llama3.1-Allades-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Llama3.1-Allades-8B) for more details on the model.
---
## Model details

Allades finetunes abliterated Llama 3.1 with 5 datasets to improve creative writing, reasoning, and roleplay.

### Datasets

- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
- jondurbin/truthy-dpo-v0.1
- kyujinpy/orca_math_dpo
- antiven0m/physical-reasoning-dpo

### Training

ORPO tuned for 1 epoch with 2x RTX 3090 (sponsored by Schneewolf Labs).

Data was prepared with Llama 3.1 Instruct.
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF --hf-file llama3.1-allades-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF --hf-file llama3.1-allades-8b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF --hf-file llama3.1-allades-8b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Llama3.1-Allades-8B-Q5_K_M-GGUF --hf-file llama3.1-allades-8b-q5_k_m.gguf -c 2048
```
|
vpkprasanna/rera_lora_model_qwen2_merged
|
vpkprasanna
| 2024-10-17T10:28:06Z | 125 | 0 |
transformers
|
[
"transformers",
"pytorch",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T10:26:01Z |
---
base_model: unsloth/qwen2.5-1.5b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** vpkprasanna
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Egdal/distilbert-base-uncased-finetuned-clinc
|
Egdal
| 2024-10-17T10:28:03Z | 8 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T08:37:06Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2821
- Accuracy: 0.9461
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.208 | 1.0 | 318 | 3.1584 | 0.7432 |
| 2.4171 | 2.0 | 636 | 1.5856 | 0.8629 |
| 1.1877 | 3.0 | 954 | 0.7955 | 0.9135 |
| 0.5858 | 4.0 | 1272 | 0.4856 | 0.9290 |
| 0.3173 | 5.0 | 1590 | 0.3597 | 0.9377 |
| 0.1963 | 6.0 | 1908 | 0.3174 | 0.94 |
| 0.1395 | 7.0 | 2226 | 0.2890 | 0.9461 |
| 0.1093 | 8.0 | 2544 | 0.2863 | 0.9445 |
| 0.0957 | 9.0 | 2862 | 0.2833 | 0.9445 |
| 0.09 | 10.0 | 3180 | 0.2821 | 0.9461 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
vpkprasanna/rera_lora_model_qwen2
|
vpkprasanna
| 2024-10-17T10:21:31Z | 101 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T10:18:33Z |
---
base_model: unsloth/qwen2.5-1.5b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
---
# Uploaded model
- **Developed by:** vpkprasanna
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-1.5b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
rosadecsai/t5-small-finetuned-paper
|
rosadecsai
| 2024-10-17T10:13:56Z | 114 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-17T06:11:59Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-finetuned-paper
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-paper
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6578
- Rouge1: 7.1584
- Rouge2: 2.1023
- Rougel: 5.6927
- Rougelsum: 6.8094
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
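As a hedged illustration only (the task is not documented; the T5 base and ROUGE metrics suggest summarization-style generation), inference might look like this:
```python
from transformers import pipeline

summarizer = pipeline("text2text-generation", model="rosadecsai/t5-small-finetuned-paper")
text = "summarize: Transformer models have become the dominant architecture for NLP tasks ..."
print(summarizer(text, max_new_tokens=64)[0]["generated_text"])
```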
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 3.8957 | 1.0 | 1124 | 3.6578 | 7.1584 | 2.1023 | 5.6927 | 6.8094 | 19.0 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
yscode/opt-125m-gptq
|
yscode
| 2024-10-17T10:02:43Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-10-17T07:53:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
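The author has not provided a snippet. As a hedged sketch only, a GPTQ-quantized OPT checkpoint like this is typically loaded through the transformers GPTQ integration (which requires optimum and a GPTQ backend to be installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: assumes the standard transformers GPTQ integration can
# deserialize this 4-bit checkpoint (pip install optimum auto-gptq accelerate).
tok = AutoTokenizer.from_pretrained("yscode/opt-125m-gptq")
model = AutoModelForCausalLM.from_pretrained("yscode/opt-125m-gptq", device_map="auto")

inputs = tok("Hello, my name is", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```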
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QuantFactory/CursorCore-QW2.5-1.5B-SR-GGUF
|
QuantFactory
| 2024-10-17T09:56:58Z | 65 | 1 |
transformers
|
[
"transformers",
"gguf",
"code",
"text-generation",
"arxiv:2410.07002",
"base_model:Qwen/Qwen2.5-Coder-1.5B",
"base_model:quantized:Qwen/Qwen2.5-Coder-1.5B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-17T09:49:20Z |
---
tags:
- code
base_model:
- Qwen/Qwen2.5-Coder-1.5B
library_name: transformers
pipeline_tag: text-generation
license: apache-2.0
---
[](https://hf.co/QuantFactory)
# QuantFactory/CursorCore-QW2.5-1.5B-SR-GGUF
This is a quantized version of [TechxGenus/CursorCore-QW2.5-1.5B-SR](https://huggingface.co/TechxGenus/CursorCore-QW2.5-1.5B-SR), created using llama.cpp.
# Original Model Card
# CursorCore: Assist Programming through Aligning Anything
<p align="center">
<a href="http://arxiv.org/abs/2410.07002">[📄arXiv]</a> |
<a href="https://hf.co/papers/2410.07002">[🤗HF Paper]</a> |
<a href="https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2">[🤖Models]</a> |
<a href="https://github.com/TechxGenus/CursorCore">[🛠️Code]</a> |
<a href="https://github.com/TechxGenus/CursorWeb">[Web]</a> |
<a href="https://discord.gg/Z5Tev8fV">[Discord]</a>
</p>
<hr>
- [CursorCore: Assist Programming through Aligning Anything](#cursorcore-assist-programming-through-aligning-anything)
- [Introduction](#introduction)
- [Models](#models)
- [Usage](#usage)
- [1) Normal chat](#1-normal-chat)
- [2) Assistant-Conversation](#2-assistant-conversation)
- [3) Web Demo](#3-web-demo)
- [Future Work](#future-work)
- [Citation](#citation)
- [Contribution](#contribution)
<hr>
## Introduction
CursorCore is a series of open-source models designed for AI-assisted programming. It aims to support features such as automated editing and inline chat, replicating the core abilities of closed-source AI-assisted programming tools like Cursor. This is achieved by aligning data generated through Programming-Instruct. Please read [our paper](http://arxiv.org/abs/2410.07002) to learn more.
<p align="center">
<img width="100%" alt="conversation" src="https://raw.githubusercontent.com/TechxGenus/CursorCore/main/pictures/conversation.png">
</p>

## Models
Our models have been open-sourced on Hugging Face. You can access our models here: [CursorCore-Series](https://huggingface.co/collections/TechxGenus/cursorcore-series-6706618c38598468866b60e2). We also provide pre-quantized weights for GPTQ and AWQ here: [CursorCore-Quantization](https://huggingface.co/collections/TechxGenus/cursorcore-quantization-67066431f29f252494ee8cf3).
## Usage
Here are some examples of how to use our model:
### 1) Normal chat
Script:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
messages = [
{"role": "user", "content": "Hi!"},
]
prompt = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(outputs[0]))
````
Output:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>user
Hi!<|im_end|>
<|im_start|>assistant
Hello! I'm an AI language model and I can help you with any programming questions you might have. What specific problem or task are you trying to solve?<|im_end|>
````
### 2) Assistant-Conversation
In our work, we introduce a new framework for the AI-assisted programming task. It is designed to align anything during the programming process and is used to implement features like Tab and Inline Chat.
Script 1:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [
{
"type": "code",
"lang": "python",
"code": """def quick_sort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
}
],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": ""
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 1:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>history
```python
def quick_sort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
if len(array) <= 1:
return array
pivot = array[len(array) // 2]
left = [x for x in array if x < pivot]
middle = [x for x in array if x == pivot]
right = [x for x in array if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|>
The user has provided a revised code change that corrects the parameter name from `arr` to `array` in the `quick_sort` function. This change ensures consistency in the function definition and avoids potential confusion or errors.
To implement this, we will:
1. Update the parameter name in the function definition from `arr` to `array`.
2. Ensure that all references to `arr` within the function are updated to `array`.
This will make the function definition and internal references consistent, improving code readability and maintainability.<|im_end|>
````
Script 2:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_wf
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-9B")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-9B",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_wf(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output 2:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
"""
This is an implementation of the quick sort algorithm.
"""
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|next_end|><|im_end|>
````
For models in Locate-and-Change (LC) and Search-and-Replace (SR) formats, the output examples are as follows:
Script for LC:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_lc
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-LC")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-LC",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_lc(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for LC:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
1 def quick_sort(array):
2 if len(arr) <= 1:
3 return arr
4 pivot = arr[len(arr) // 2]
5 left = [x for x in arr if x < pivot]
6 middle = [x for x in arr if x == pivot]
7 right = [x for x in arr if x > pivot]
8 return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>1,1
```
'''This function will sort an array using quick sort algorithm'''
```<|next_end|>
To enhance the readability and maintainability of the code, we should add a docstring to the `quick_sort` function. A docstring provides a clear description of what the function does, which is particularly useful for other developers who might use or modify this code in the future.
The docstring will be added immediately after the function definition, explaining that the function uses the quick sort algorithm to sort an array. This will make the code more self-explanatory and easier to understand.
Here's the plan:
1. Add a docstring at the beginning of the `quick_sort` function.
2. Ensure the docstring is clear and concise, describing the purpose of the function.
This modification will improve the code's documentation without altering its functionality.<|im_end|>
````
Script for SR:
````python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from eval.utils import prepare_input_for_sr
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/CursorCore-Yi-1.5B-SR")
model = AutoModelForCausalLM.from_pretrained(
"TechxGenus/CursorCore-Yi-1.5B-SR",
torch_dtype=torch.bfloat16,
device_map="auto"
)
sample = {
"history": [],
"current": {
"type": "code",
"lang": "python",
"code": """def quick_sort(array):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quick_sort(left) + middle + quick_sort(right)"""
},
"user": "Add Docstring."
}
prompt = tokenizer.apply_chat_template(
prepare_input_for_sr(sample),
tokenize=False,
chat_template="assistant-conversation",
add_generation_prompt=True
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0]))
````
Output for SR:
````txt
<|im_start|>system
You are a helpful programming assistant.<|im_end|>
<|im_start|>current
```python
def quick_sort(array):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quick_sort(left) + middle + quick_sort(right)
```<|im_end|>
<|im_start|>user
Add Docstring.<|im_end|>
<|im_start|>assistant
<|next_start|>```python
def quick_sort(array):
<|search_and_replace|>
def quick_sort(array):
"""
This function implements quick sort algorithm
"""
```<|next_end|><|im_end|>
````
### 3) Web Demo
We create a web demo for CursorCore. Please visit [CursorWeb](https://github.com/TechxGenus/CursorWeb) for more details.
## Future Work
CursorCore is still in a very early stage, and lots of work is needed to achieve a better user experience. For example:
- Repository-level editing support
- Better and faster editing formats
- Better user interface and presentation
- ...
## Citation
```bibtex
@article{jiang2024cursorcore,
title = {CursorCore: Assist Programming through Aligning Anything},
author = {Hao Jiang and Qi Liu and Rui Li and Shengyu Ye and Shijin Wang},
year = {2024},
journal = {arXiv preprint arXiv: 2410.07002}
}
```
## Contribution
Contributions are welcome! If you find any bugs or have suggestions for improvements, please open an issue or submit a pull request.
|
laurabraad/distilbert-base-uncased-finetuned-clinc
|
laurabraad
| 2024-10-17T09:54:50Z | 5 | 0 | null |
[
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2024-10-04T06:35:22Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7721
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
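As a rough, unofficial illustration (the original training script is not included in this card), the settings above map onto `transformers.TrainingArguments` approximately as follows; the output directory and any unlisted defaults are assumptions:

```python
# Hypothetical reconstruction of the hyperparameters listed above; output_dir
# and anything not listed are assumptions, not the author's actual script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-clinc",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=48,
    per_device_eval_batch_size=48,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```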
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2895 | 1.0 | 318 | 3.2884 | 0.7419 |
| 2.6277 | 2.0 | 636 | 1.8751 | 0.8368 |
| 1.5479 | 3.0 | 954 | 1.1569 | 0.8961 |
| 1.0148 | 4.0 | 1272 | 0.8573 | 0.9132 |
| 0.7952 | 5.0 | 1590 | 0.7721 | 0.9181 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.4.1+cu121
- Datasets 1.16.1
- Tokenizers 0.19.1
|
bekzod1/inventory-management
|
bekzod1
| 2024-10-17T09:52:03Z | 83 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-10-16T16:48:54Z |
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** bekzod1
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
nclgbd/llava-med-v1.5-mistral-7b-pretrain-test
|
nclgbd
| 2024-10-17T09:46:41Z | 6 | 0 | null |
[
"safetensors",
"llava_llama",
"generated_from_trainer",
"base_model:microsoft/llava-med-v1.5-mistral-7b",
"base_model:finetune:microsoft/llava-med-v1.5-mistral-7b",
"license:apache-2.0",
"region:us"
] | null | 2024-10-17T08:39:12Z |
---
base_model: microsoft/llava-med-v1.5-mistral-7b
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: llava-med-v1.5-mistral-7b-pretrain-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llava-med-v1.5-mistral-7b-pretrain-test
This model is a fine-tuned version of [microsoft/llava-med-v1.5-mistral-7b](https://huggingface.co/microsoft/llava-med-v1.5-mistral-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 50
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.16.0
- Tokenizers 0.15.1
|
mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF
|
mradermacher
| 2024-10-17T09:40:06Z | 60 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"base_model:quantized:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:48:05Z |
---
base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
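As a minimal sketch (not an official snippet from this repo), one of the quants in the table below can also be loaded directly with the `llama-cpp-python` bindings; the chosen file matches the table, while the context size, prompt, and sampling settings are assumptions:

```python
# Illustrative only: loads the Q4_K_M quant listed below via llama-cpp-python.
# n_ctx, the prompt, and max_tokens are assumptions, not recommendations.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF",
    filename="Gemma-2-Ataraxy-v4c-9B.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write a short haiku about model merging.", max_tokens=64)
print(out["choices"][0]["text"])
```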
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF
|
mradermacher
| 2024-10-17T09:40:06Z | 79 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"base_model:quantized:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T08:12:59Z |
---
base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4c-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4c-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
rahul-bhoyar-1995/reuters-gpt2-text-gen
|
rahul-bhoyar-1995
| 2024-10-17T09:39:05Z | 137 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"gpt2",
"text-generation",
"generated_from_trainer",
"base_model:rahul-bhoyar-1995/reuters-gpt2-text-gen",
"base_model:finetune:rahul-bhoyar-1995/reuters-gpt2-text-gen",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T09:05:51Z |
---
library_name: transformers
license: mit
base_model: rahul-bhoyar-1995/reuters-gpt2-text-gen
tags:
- generated_from_trainer
model-index:
- name: reuters-gpt2-text-gen
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reuters-gpt2-text-gen
This model is a fine-tuned version of [rahul-bhoyar-1995/reuters-gpt2-text-gen](https://huggingface.co/rahul-bhoyar-1995/reuters-gpt2-text-gen) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.3160
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.0816 | 0.9940 | 125 | 5.3160 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.2.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
natriistorm/DeepPavlov-ABSA
|
natriistorm
| 2024-10-17T09:38:41Z | 107 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"xlm-roberta",
"token-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-07T08:28:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
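No official snippet is provided; as an illustrative sketch based only on the repo's `token-classification` pipeline tag, the example sentence and aggregation strategy below are assumptions:

```python
# Illustrative sketch only — not an official usage example for this model;
# label names and their meaning depend on the (undocumented) training data.
from transformers import pipeline

tagger = pipeline(
    "token-classification",
    model="natriistorm/DeepPavlov-ABSA",
    aggregation_strategy="simple",  # assumed
)
print(tagger("The battery life is great, but the screen is too dim."))
```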
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
fabdream/Comicbook-vintage
|
fabdream
| 2024-10-17T09:37:43Z | 36 | 1 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-10-17T09:36:30Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
comic book art, a handsome male astronaut wearing sleek modern spacesuit
holding a retro laser pistol on (A massive, ancient spaceship floating
derelict in deep space.), retro sci-fi style, dynamic lighting, digital art,
cinematic shot, fantastically beautiful, illustration, aesthetically
inspired by classic sci-fi movies, by Paul Lehr by Jon Whitcomb, samdoesart,
dreamlikeart.
output:
url: images/Comic book V2, strength 1.5_00006.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Comic book style
---
# Comicbook-vintage
<Gallery />
## Trigger words
You should use `Comic book style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/fabdream/Comicbook-vintage/tree/main) them in the Files & versions tab.
|
Gummybear05/wav2vec2-E30_freq_speed_pause
|
Gummybear05
| 2024-10-17T09:33:54Z | 21 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-10-17T08:06:03Z |
---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-E30_freq_speed_pause
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-E30_freq_speed_pause
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0912
- Cer: 46.1231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 31.7937 | 0.1289 | 200 | 5.1147 | 100.0 |
| 4.9653 | 0.2579 | 400 | 4.6684 | 100.0 |
| 4.8114 | 0.3868 | 600 | 4.6765 | 100.0 |
| 4.7658 | 0.5158 | 800 | 4.6123 | 97.7150 |
| 4.6791 | 0.6447 | 1000 | 4.6076 | 98.9544 |
| 4.6438 | 0.7737 | 1200 | 4.6205 | 97.6974 |
| 4.5903 | 0.9026 | 1400 | 4.4614 | 97.8442 |
| 4.439 | 1.0316 | 1600 | 4.4028 | 98.2848 |
| 4.1968 | 1.1605 | 1800 | 4.2323 | 94.1612 |
| 3.8917 | 1.2895 | 2000 | 3.8326 | 78.5127 |
| 3.5148 | 1.4184 | 2200 | 3.6092 | 70.7119 |
| 3.2601 | 1.5474 | 2400 | 3.3938 | 71.4873 |
| 3.0276 | 1.6763 | 2600 | 3.1059 | 64.2094 |
| 2.8883 | 1.8053 | 2800 | 2.9391 | 61.2841 |
| 2.7381 | 1.9342 | 3000 | 2.7814 | 59.1929 |
| 2.5905 | 2.0632 | 3200 | 2.5964 | 54.9988 |
| 2.4555 | 2.1921 | 3400 | 2.3926 | 51.0456 |
| 2.3566 | 2.3211 | 3600 | 2.3930 | 51.1689 |
| 2.2751 | 2.4500 | 3800 | 2.2846 | 49.4596 |
| 2.1796 | 2.5790 | 4000 | 2.1934 | 48.0028 |
| 2.1292 | 2.7079 | 4200 | 2.1426 | 47.0923 |
| 2.0724 | 2.8369 | 4400 | 2.1201 | 47.0042 |
| 2.0759 | 2.9658 | 4600 | 2.0912 | 46.1231 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Modular/model
|
Modular
| 2024-10-17T09:29:06Z | 11 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T08:57:47Z |
---
base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---
# Uploaded model
- **Developed by:** Modular
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
giordano-lucas-t/llama-3.2-1b-4-epochs_16
|
giordano-lucas-t
| 2024-10-17T09:26:33Z | 15 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T09:25:51Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** giordano-lucas-t
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mateiaassAI/teacher_emo
|
mateiaassAI
| 2024-10-17T09:24:06Z | 106 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dumitrescustefan/bert-base-romanian-cased-v1",
"base_model:finetune:dumitrescustefan/bert-base-romanian-cased-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-17T09:23:49Z |
---
library_name: transformers
license: mit
base_model: dumitrescustefan/bert-base-romanian-cased-v1
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: teacher_emo
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher_emo
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0567
- F1: 0.9342
- Roc Auc: 0.9586
- Accuracy: 0.926
- Precision: 0.9322
- Recall: 0.9365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:---------:|:------:|
| 0.1525 | 1.0 | 1000 | 0.1035 | 0.8945 | 0.9306 | 0.881 | 0.9074 | 0.8835 |
| 0.0692 | 2.0 | 2000 | 0.0659 | 0.9284 | 0.9511 | 0.92 | 0.9370 | 0.922 |
| 0.0476 | 3.0 | 3000 | 0.0571 | 0.9343 | 0.9578 | 0.929 | 0.9377 | 0.9315 |
| 0.0354 | 4.0 | 4000 | 0.0567 | 0.9342 | 0.9586 | 0.926 | 0.9322 | 0.9365 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
giordano-lucas-t/llama-3.2-1b-4-epochs_8
|
giordano-lucas-t
| 2024-10-17T09:22:49Z | 9 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-1B-Instruct",
"base_model:quantized:unsloth/Llama-3.2-1B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T09:22:24Z |
---
base_model: unsloth/Llama-3.2-1B-Instruct
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
---
# Uploaded model
- **Developed by:** giordano-lucas-t
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-1B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF
|
Triangle104
| 2024-10-17T09:21:15Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Mistral-Small-Drummer-22B",
"base_model:quantized:nbeerbower/Mistral-Small-Drummer-22B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T09:12:51Z |
---
base_model: nbeerbower/Mistral-Small-Drummer-22B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Mistral-Small-Drummer-22B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 63.31
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 40.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.69
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.42
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.8
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.39
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
---
# Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF
This model was converted to GGUF format from [`nbeerbower/Mistral-Small-Drummer-22B`](https://huggingface.co/nbeerbower/Mistral-Small-Drummer-22B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Mistral-Small-Drummer-22B) for more details on the model.
---
Model details:
-
mistralai/Mistral-Small-Instruct-2409 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
ORPO tuned with 2xA40 on RunPod for 1 epoch.
learning_rate=4e-6,
lr_scheduler_type="linear",
beta=0.1,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=8,
optim="paged_adamw_8bit",
num_train_epochs=1,
Dataset was prepared using Mistral-Small Instruct format.
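As a rough, unofficial sketch, the ORPO settings above correspond approximately to trl's `ORPOConfig`; the output directory and anything not listed are assumptions:

```python
# Approximate reconstruction of the ORPO settings listed above using trl;
# output_dir and unlisted defaults are assumptions, not the author's script.
from trl import ORPOConfig

orpo_args = ORPOConfig(
    output_dir="Mistral-Small-Drummer-22B",  # assumed
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
)
```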
Fine-tune Llama 3 with ORPO
Open LLM Leaderboard Evaluation Results
Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B).

| Metric | Value |
|:--------------------|------:|
| Avg. | 29.45 |
| IFEval (0-Shot) | 63.31 |
| BBH (3-Shot) | 40.12 |
| MATH Lvl 5 (4-Shot) | 16.69 |
| GPQA (0-shot) | 12.42 |
| MuSR (0-shot) | 9.80 |
| MMLU-PRO (5-shot) | 34.39 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF --hf-file mistral-small-drummer-22b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF --hf-file mistral-small-drummer-22b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF --hf-file mistral-small-drummer-22b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q6_K-GGUF --hf-file mistral-small-drummer-22b-q6_k.gguf -c 2048
```
|
Jagobaemeka/my_awesome_food_model
|
Jagobaemeka
| 2024-10-17T09:16:53Z | 194 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2024-10-17T08:47:57Z |
---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_food_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_food_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6531
- Accuracy: 0.873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.711 | 0.992 | 62 | 2.5698 | 0.801 |
| 1.8586 | 2.0 | 125 | 1.8322 | 0.852 |
| 1.6124 | 2.976 | 186 | 1.6531 | 0.873 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
sophiebui/test-translation
|
sophiebui
| 2024-10-17T09:12:29Z | 6 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"generated_from_trainer",
"base_model:facebook/m2m100_418M",
"base_model:finetune:facebook/m2m100_418M",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-15T07:48:38Z |
---
library_name: transformers
license: mit
base_model: facebook/m2m100_418M
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: test-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-translation
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3657
- Bleu: 32.2114
- Gen Len: 13.3333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 1 | 1.4545 | 23.3073 | 14.0 |
| No log | 2.0 | 2 | 1.3870 | 32.2114 | 13.3333 |
| No log | 3.0 | 3 | 1.3657 | 32.2114 | 13.3333 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
YU310takuto/clap_emospeechtest_ver0.2
|
YU310takuto
| 2024-10-17T09:09:03Z | 9 | 0 | null |
[
"safetensors",
"clap",
"region:us"
] | null | 2024-10-17T07:45:21Z |
Hugging faceのモデルのファインチューニングのテスト。Ver0.2
使用したデータセットは、「声優統計コーパス:日本声優統計学会( https://voice-statistics.github.io/ )」を全て入れたものになります。
CLAPを学習する際の、音声に付随するキャプションは、「Japanese female actor's (感情) voice」で固定したところ、
ファインチューニングしたモデルを用いてクラス分類したときに["happy", "angry", "normal"]と["happy voice", "angry voice", "normal voice"]で結果が変わりました。
原因はまだ謎です。
また、先日アップしたVer0.1はそのうち削除します。
Hugging Faceやclapのモデルを使っている日本人の有識者がいれば、ぜひ色々教えていただきたいです。
---
A fine-tuning test of the Hugging Face model, Ver0.2.
The dataset used was the entire "Voice Actor Statistical Corpus: Japan Voice Actor Statistical Association (https://voice-statistics.github.io/)".
When training CLAP, the caption attached to each recording was fixed to "Japanese female actor's (emotion) voice".
When the fine-tuned model was then used for classification, the results differed depending on whether the candidate labels were ["happy", "angry", "normal"] or ["happy voice", "angry voice", "normal voice"].
The cause is still a mystery.
Also, I will delete Ver0.1, which was uploaded the other day.
If there are any experts who use Hugging Face or the CLAP model, I would love to hear from you.
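As an illustrative sketch only (not part of the original note), the label-phrasing comparison described above can be reproduced with the zero-shot audio classification pipeline; the audio path is a placeholder:

```python
# Illustrative sketch: compares the two candidate-label phrasings mentioned above.
# "sample.wav" is a placeholder path, not a file shipped with this repo.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-audio-classification",
    model="YU310takuto/clap_emospeechtest_ver0.2",
)
for labels in (["happy", "angry", "normal"],
               ["happy voice", "angry voice", "normal voice"]):
    print(labels, classifier("sample.wav", candidate_labels=labels))
```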
---
base_model:
- laion/larger_clap_music_and_speech
tags:
- CLAP
---
|
bunnycore/mergekit-ties-tfvicst
|
bunnycore
| 2024-10-17T09:07:34Z | 48 | 0 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"mergekit",
"merge",
"conversational",
"custom_code",
"arxiv:2306.01708",
"base_model:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"base_model:merge:ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1",
"base_model:bunnycore/Phi-3.5-Mini-RP-Sonet",
"base_model:merge:bunnycore/Phi-3.5-Mini-RP-Sonet",
"base_model:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"base_model:merge:bunnycore/Phi-3.5-mini-TitanFusion-0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T09:05:37Z |
---
base_model:
- ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
- ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
- bunnycore/Phi-3.5-Mini-RP-Sonet
- bunnycore/Phi-3.5-mini-TitanFusion-0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) as a base.
### Models Merged
The following models were included in the merge:
* [ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) + [bunnycore/Phi-3.5-Mini-RP-Sonet](https://huggingface.co/bunnycore/Phi-3.5-Mini-RP-Sonet)
* [bunnycore/Phi-3.5-mini-TitanFusion-0.1](https://huggingface.co/bunnycore/Phi-3.5-mini-TitanFusion-0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1+bunnycore/Phi-3.5-Mini-RP-Sonet
parameters:
weight: 1
density: 1
- model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
parameters:
weight: 1
density: 1
- model: bunnycore/Phi-3.5-mini-TitanFusion-0.1
parameters:
weight: 1
density: 1
merge_method: ties
base_model: ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1
parameters:
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
```
|
akthangdz/fb-tts
|
akthangdz
| 2024-10-17T09:07:34Z | 7 | 0 | null |
[
"pytorch",
"safetensors",
"vits",
"mms",
"text-to-speech",
"arxiv:2305.13516",
"license:cc-by-nc-4.0",
"region:us"
] |
text-to-speech
| 2024-10-17T09:03:05Z |
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Vietnamese Text-to-Speech
This repository contains the **Vietnamese (vie)** language text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.
## Model Details
VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.
A set of spectrogram-based acoustic features are predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
The model is trained end-to-end with a combination of losses derived from variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
For the MMS project, a separate VITS checkpoint is trained on each language.
## Usage
MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:
```
pip install --upgrade transformers accelerate
```
Then, run inference with the following code-snippet:
```python
from transformers import VitsModel, AutoTokenizer
import torch
model = VitsModel.from_pretrained("facebook/mms-tts-vie")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-vie")
text = "some example text in the Vietnamese language"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
output = model(**inputs).waveform
```
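Because the duration predictor is stochastic (see Model Details above), generation is non-deterministic; as a small optional addition not in the original snippet, fixing the seed makes the waveform repeatable:

```python
# Optional: fix the random seed so repeated runs produce the same waveform.
from transformers import set_seed

set_seed(555)  # any fixed value works; 555 is an arbitrary choice
```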
The resulting waveform can be saved as a `.wav` file:
```python
import scipy
scipy.io.wavfile.write("techno.wav", rate=model.config.sampling_rate, data=output)
```
Or displayed in a Jupyter Notebook / Google Colab:
```python
from IPython.display import Audio
Audio(output, rate=model.config.sampling_rate)
```
## BibTex citation
This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:
```
@article{pratap2023mms,
title={Scaling Speech Technology to 1,000+ Languages},
author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
journal={arXiv},
year={2023}
}
```
## License
The model is licensed as **CC-BY-NC 4.0**.
|
jester6136/multilingual-e5-large-m2v
|
jester6136
| 2024-10-17T09:07:12Z | 106 | 0 |
model2vec
|
[
"model2vec",
"safetensors",
"embeddings",
"static-embeddings",
"multilingual",
"af",
"am",
"ar",
"as",
"az",
"be",
"bg",
"bn",
"br",
"bs",
"ca",
"cs",
"cy",
"da",
"de",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gd",
"gl",
"gu",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"id",
"is",
"it",
"ja",
"jv",
"ka",
"kk",
"km",
"kn",
"ko",
"ku",
"ky",
"la",
"lo",
"lt",
"lv",
"mg",
"mk",
"ml",
"mn",
"mr",
"ms",
"my",
"ne",
"nl",
"no",
"om",
"or",
"pa",
"pl",
"ps",
"pt",
"ro",
"ru",
"sa",
"sd",
"si",
"sk",
"sl",
"so",
"sq",
"sr",
"su",
"sv",
"sw",
"ta",
"te",
"th",
"tl",
"tr",
"ug",
"uk",
"ur",
"uz",
"vi",
"xh",
"yi",
"zh",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"license:mit",
"region:us"
] | null | 2024-10-16T08:05:19Z |
---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
- af
- am
- ar
- as
- az
- be
- bg
- bn
- br
- bs
- ca
- cs
- cy
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- fy
- ga
- gd
- gl
- gu
- ha
- he
- hi
- hr
- hu
- hy
- id
- is
- it
- ja
- jv
- ka
- kk
- km
- kn
- ko
- ku
- ky
- la
- lo
- lt
- lv
- mg
- mk
- ml
- mn
- mr
- ms
- my
- ne
- nl
- 'no'
- om
- or
- pa
- pl
- ps
- pt
- ro
- ru
- sa
- sd
- si
- sk
- sl
- so
- sq
- sr
- su
- sv
- sw
- ta
- te
- th
- tl
- tr
- ug
- uk
- ur
- uz
- vi
- xh
- yi
- zh
library_name: model2vec
license: mit
model_name: jester6136/multilingual-e5-large-m2v
tags:
- embeddings
- static-embeddings
---
# jester6136/multilingual-e5-large-m2v Model Card
This [Model2Vec](https://github.com/MinishLab/model2vec) model is a distilled version of the [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) Sentence Transformer. It uses static embeddings, allowing text embeddings to be computed orders of magnitude faster on both GPU and CPU. It is designed for applications where computational resources are limited or where real-time performance is critical.
## Installation
Install using pip:
```
pip install model2vec reach tqdm numpy
```
## Usage
```python
import numpy as np
from model2vec import StaticModel
from reach import Reach
from tqdm import tqdm
import time
class TextDeduplicator:
def __init__(self, model_path: str):
# Load the pre-trained model
self.model = StaticModel.from_pretrained(model_path)
def encode_texts(self, texts: list[str]) -> np.ndarray:
# Prepare the texts and encode them into embeddings
texts = [f"query: {text}" for text in texts]
embedding_matrix = self.model.encode(texts, show_progressbar=True)
return embedding_matrix
def deduplicate(self, embedding_matrix: np.ndarray, threshold: float, batch_size: int = 1024
) -> tuple[np.ndarray, dict[int, list[int]]]:
# Deduplicate the texts based on their embeddings
reach = Reach(vectors=embedding_matrix, items=[str(i) for i in range(len(embedding_matrix))])
results = reach.nearest_neighbor_threshold(
embedding_matrix, threshold=threshold, batch_size=batch_size, show_progressbar=True
)
deduplicated_indices = set(range(len(embedding_matrix)))
duplicate_groups = {}
for i, similar_items in enumerate(tqdm(results)):
if i not in deduplicated_indices:
continue
similar_indices = [int(item[0]) for item in similar_items if int(item[0]) != i]
for sim_idx in similar_indices:
if sim_idx in deduplicated_indices:
deduplicated_indices.remove(sim_idx)
if i not in duplicate_groups:
duplicate_groups[i] = []
duplicate_groups[i].append(sim_idx)
return np.array(list(deduplicated_indices)), duplicate_groups
def deduplicate_texts(self, texts: list[str], threshold: float) -> tuple[np.ndarray, dict[int, list[int]]]:
# End-to-end deduplication process
embedding_matrix = self.encode_texts(texts)
return self.deduplicate(embedding_matrix, threshold)
if __name__ == "__main__":
# Example usage
texts = [
"Anh yêu em.",
"Mọi thứ ở công ty mới đều lạ lẫm, nhưng tôi cảm thấy rất sẵn sàng để bắt đầu hành trình mới.",
"Trận đấu bóng đá tối qua rất căng thẳng, hai đội liên tục tấn công và phòng thủ.",
"Một quan chức Fed muốn giảm bớt tốc độ hạ lãi suất",
"Ngày đầu tiên tại công ty mới đầy ấn tượng, tôi hy vọng sẽ nhanh chóng hòa nhập với môi trường làm việc.",
"Mùa hè này, cả gia đình sẽ có một chuyến đi đến Đà Nẵng, nơi mà chúng tôi đã mong chờ từ rất lâu.",
"Gia đình tôi đã lên kế hoạch cho kỳ nghỉ tại Đà Nẵng vào mùa hè này, một chuyến đi mà mọi người đều háo hức.",
"Fed có bước tiến mới để hạ lãi suất",
"Chúng tôi đã dự định từ lâu sẽ đi Đà Nẵng vào mùa hè này, và cả nhà đều rất trông đợi chuyến du lịch.",
"Ngày đầu đi làm thật là thú vị, tuy có chút hồi hộp nhưng tôi mong chờ những điều mới mẻ.",
"Mùa hè năm nay, gia đình tôi sẽ du lịch Đà Nẵng, chuyến đi mà ai cũng mong đợi từ trước."
]
deduplicator = TextDeduplicator("jester6136/multilingual-e5-large-m2v")
start_time = time.time()
deduplicated_indices, duplicate_groups = deduplicator.deduplicate_texts(texts, threshold=0.85)
end_time = time.time()
print(f"Deduplication completed in {end_time - start_time:.2f} seconds")
print(f"Deduped output: {deduplicated_indices}")
print(f"Group dup: {duplicate_groups}")
```
## How it works
Model2Vec creates a small, fast, and powerful model that outperforms other static embedding models by a large margin on all tasks we could find, while being much faster to create than traditional static embedding models such as GloVe. Best of all, you don't need any data to distill a model using Model2Vec.
It works by passing a vocabulary through a sentence transformer model, then reducing the dimensionality of the resulting embeddings using PCA, and finally weighting the embeddings using Zipf weighting. During inference, we simply take the mean of all token embeddings occurring in a sentence.
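As a rough, self-contained illustration of that pipeline (random vectors stand in for the real sentence-transformer token embeddings, and the Zipf weights are simply rank-based; this is a toy sketch, not the library's actual implementation):
```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for "one sentence-transformer embedding per vocabulary token";
# in the real pipeline each row would come from intfloat/multilingual-e5-large.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
token_vecs = rng.normal(size=(len(vocab), 16))

# 1) Reduce the dimensionality of the token embeddings with PCA.
reduced = PCA(n_components=4).fit_transform(token_vecs)

# 2) Apply a Zipf-style weight so very frequent (low-rank) tokens contribute less.
weights = 1.0 / np.arange(1, len(vocab) + 1)
static_vectors = reduced * weights[:, None]

# 3) Inference is just mean pooling over the static vectors of the tokens in a sentence.
token_to_vec = dict(zip(vocab, static_vectors))
sentence_embedding = np.mean([token_to_vec[t] for t in ["the", "cat", "sat"]], axis=0)
print(sentence_embedding.shape)  # (4,)
```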
## Additional Resources
- [All Model2Vec models on the hub](https://huggingface.co/models?library=model2vec)
- [Model2Vec Repo](https://github.com/MinishLab/model2vec)
- [Model2Vec Results](https://github.com/MinishLab/model2vec?tab=readme-ov-file#results)
- [Model2Vec Tutorials](https://github.com/MinishLab/model2vec/tree/main/tutorials)
## Library Authors
Model2Vec was developed by the [Minish Lab](https://github.com/MinishLab) team consisting of [Stephan Tulkens](https://github.com/stephantul) and [Thomas van Dongen](https://github.com/Pringled).
## Citation
Please cite the [Model2Vec repository](https://github.com/MinishLab/model2vec) if you use this model in your work.
```
@software{minishlab2024model2vec,
authors = {Stephan Tulkens, Thomas van Dongen},
title = {Model2Vec: Turn any Sentence Transformer into a Small Fast Model},
year = {2024},
url = {https://github.com/MinishLab/model2vec},
}
```
|
Dovud-Asadov/SFT-Llama-3.1-70B-for-SPEAKLISH
|
Dovud-Asadov
| 2024-10-17T09:07:00Z | 36 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-17T08:58:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/vinallama-7b-history-GGUF
|
mradermacher
| 2024-10-17T09:02:06Z | 56 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:tuanpasg/vinallama-7b-history",
"base_model:quantized:tuanpasg/vinallama-7b-history",
"endpoints_compatible",
"region:us"
] | null | 2024-10-17T08:47:09Z |
---
base_model: tuanpasg/vinallama-7b-history
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/tuanpasg/vinallama-7b-history
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
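For example, the GGUF files in this repo can be loaded directly with the `llama-cpp-python` bindings (a minimal sketch; the chosen quant file and prompt are only examples, and `Llama.from_pretrained` requires `huggingface-hub` to be installed):
```python
from llama_cpp import Llama

# Download the selected quant from this repo via the Hugging Face Hub and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/vinallama-7b-history-GGUF",
    filename="vinallama-7b-history.Q4_K_M.gguf",
    n_ctx=2048,
)

out = llm("Summarize the history of Vietnam in the 19th century.", max_tokens=128)
print(out["choices"][0]["text"])
```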
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q2_K.gguf) | Q2_K | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q3_K_S.gguf) | Q3_K_S | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q3_K_M.gguf) | Q3_K_M | 3.5 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q3_K_L.gguf) | Q3_K_L | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.IQ4_XS.gguf) | IQ4_XS | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q4_K_S.gguf) | Q4_K_S | 4.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q4_K_M.gguf) | Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q5_K_S.gguf) | Q5_K_S | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q5_K_M.gguf) | Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q6_K.gguf) | Q6_K | 5.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.Q8_0.gguf) | Q8_0 | 7.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/vinallama-7b-history-GGUF/resolve/main/vinallama-7b-history.f16.gguf) | f16 | 13.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf
|
RichardErkhov
| 2024-10-17T09:01:07Z | 5 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T12:30:58Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Llama-3-Lumimaid-70B-v0.1 - GGUF
- Model creator: https://huggingface.co/NeverSleep/
- Original model: https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3-Lumimaid-70B-v0.1.Q2_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q2_K.gguf) | Q2_K | 24.56GB |
| [Llama-3-Lumimaid-70B-v0.1.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.IQ3_XS.gguf) | IQ3_XS | 27.29GB |
| [Llama-3-Lumimaid-70B-v0.1.IQ3_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.IQ3_S.gguf) | IQ3_S | 28.79GB |
| [Llama-3-Lumimaid-70B-v0.1.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q3_K_S.gguf) | Q3_K_S | 28.79GB |
| [Llama-3-Lumimaid-70B-v0.1.IQ3_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.IQ3_M.gguf) | IQ3_M | 29.74GB |
| [Llama-3-Lumimaid-70B-v0.1.Q3_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q3_K.gguf) | Q3_K | 31.91GB |
| [Llama-3-Lumimaid-70B-v0.1.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q3_K_M.gguf) | Q3_K_M | 31.91GB |
| [Llama-3-Lumimaid-70B-v0.1.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q3_K_L.gguf) | Q3_K_L | 34.59GB |
| [Llama-3-Lumimaid-70B-v0.1.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.IQ4_XS.gguf) | IQ4_XS | 35.64GB |
| [Llama-3-Lumimaid-70B-v0.1.Q4_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/blob/main/Llama-3-Lumimaid-70B-v0.1.Q4_0.gguf) | Q4_0 | 37.22GB |
| [Llama-3-Lumimaid-70B-v0.1.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | IQ4_NL | 37.58GB |
| [Llama-3-Lumimaid-70B-v0.1.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q4_K_S | 37.58GB |
| [Llama-3-Lumimaid-70B-v0.1.Q4_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q4_K | 39.6GB |
| [Llama-3-Lumimaid-70B-v0.1.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q4_K_M | 39.6GB |
| [Llama-3-Lumimaid-70B-v0.1.Q4_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q4_1 | 41.27GB |
| [Llama-3-Lumimaid-70B-v0.1.Q5_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q5_0 | 45.32GB |
| [Llama-3-Lumimaid-70B-v0.1.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q5_K_S | 45.32GB |
| [Llama-3-Lumimaid-70B-v0.1.Q5_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q5_K | 46.52GB |
| [Llama-3-Lumimaid-70B-v0.1.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q5_K_M | 46.52GB |
| [Llama-3-Lumimaid-70B-v0.1.Q5_1.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q5_1 | 49.36GB |
| [Llama-3-Lumimaid-70B-v0.1.Q6_K.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q6_K | 53.91GB |
| [Llama-3-Lumimaid-70B-v0.1.Q8_0.gguf](https://huggingface.co/RichardErkhov/NeverSleep_-_Llama-3-Lumimaid-70B-v0.1-gguf/tree/main/) | Q8_0 | 69.83GB |
Original model description:
---
license: cc-by-nc-4.0
tags:
- not-for-all-audiences
- nsfw
---
## Lumimaid 0.1
<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>
This model uses the Llama3 **prompting format**
Llama3 trained on our RP datasets; we tried to strike a balance between ERP and RP, not too horny, but just enough.
We also added some non-RP data, making the model less dumb overall. The mix works out to roughly a 40%/60% ratio of non-RP to RP+ERP data.
This model includes the new Luminae dataset from Ikari.
If you consider trying this model, please give us some feedback, either on the Community tab on HF or on our [Discord Server](https://discord.gg/MtCVRWTZXY).
## Credits:
- Undi
- IkariDev
## Description
This repo contains FP16 files of Lumimaid-70B-v0.1.
Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS)
## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt) (i2 did not exist yet when the 70B started training) | Luminae-i2 (8B) (this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
## Models used (only for 8B)
- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B
## Prompt template: Llama3
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
```
## Others
Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek
|
Joseph2142/ppo-Huggy
|
Joseph2142
| 2024-10-17T08:58:48Z | 6 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2024-10-17T08:58:42Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Joseph2142/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
AlexKoff88/opt-125m-openvino-4bit
|
AlexKoff88
| 2024-10-17T08:57:59Z | 9 | 0 | null |
[
"openvino",
"opt",
"text-generation",
"nncf",
"4-bit",
"en",
"base_model:facebook/opt-125m",
"base_model:finetune:facebook/opt-125m",
"license:other",
"region:us"
] |
text-generation
| 2024-10-17T08:00:23Z |
---
base_model: facebook/opt-125m
language: en
license: other
tags:
- text-generation
- opt
- openvino
- nncf
- 4-bit
inference: false
commercial: false
---
This model is a quantized version of [`facebook/opt-125m`](https://huggingface.co/facebook/opt-125m) and is converted to the OpenVINO format. This model was obtained via the [nncf-quantization](https://huggingface.co/spaces/echarlaix/nncf-quantization) space with [optimum-intel](https://github.com/huggingface/optimum-intel).
First make sure you have `optimum-intel` installed:
```bash
pip install optimum[openvino]
```
To load your model you can do as follows:
```python
from optimum.intel import OVModelForCausalLM
model_id = "AlexKoff88/opt-125m-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
```
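From there, generation works like any other 🤗 `transformers` causal LM; a minimal sketch (the prompt and generation settings are arbitrary):
```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForCausalLM

model_id = "AlexKoff88/opt-125m-openvino-4bit"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The OpenVINO model plugs into the regular text-generation pipeline
generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("Deep learning is", max_new_tokens=30)[0]["generated_text"])
```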
|
mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF
|
mradermacher
| 2024-10-17T08:49:08Z | 420 | 2 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Etherll/Qwen2.5-Coder-1.5B-CodeFIM",
"base_model:quantized:Etherll/Qwen2.5-Coder-1.5B-CodeFIM",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T08:45:37Z |
---
base_model: Etherll/Qwen2.5-Coder-1.5B-CodeFIM
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Etherll/Qwen2.5-Coder-1.5B-CodeFIM
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
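As a concrete example, the quants can be used for fill-in-the-middle completion through the `llama-cpp-python` bindings (a sketch; the FIM special tokens below follow the base Qwen2.5-Coder convention and are an assumption for this finetune):
```python
from llama_cpp import Llama

# Download the selected quant from this repo via the Hugging Face Hub and load it
llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF",
    filename="Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_M.gguf",
    n_ctx=2048,
)

# Fill-in-the-middle: the model completes the code between prefix and suffix
prefix = "def add(a, b):\n    "
suffix = "\n    return result\n"
prompt = f"<|fim_prefix|>{prefix}<|fim_suffix|>{suffix}<|fim_middle|>"

out = llm(prompt, max_tokens=32)
print(out["choices"][0]["text"])
```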
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-1.5B-CodeFIM-GGUF/resolve/main/Qwen2.5-Coder-1.5B-CodeFIM.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
marziye-A/whisper-large-v3-full-youtube_80hour_7
|
marziye-A
| 2024-10-17T08:43:59Z | 21 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"fa",
"dataset:mozilla-foundation/common_voice_15_0",
"base_model:openai/whisper-large",
"base_model:finetune:openai/whisper-large",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2024-09-22T13:15:10Z |
---
library_name: transformers
language:
- fa
base_model: openai/whisper-large
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_15_0
metrics:
- wer
model-index:
- name: Whisper large fa - marziye-A
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 15.0
type: mozilla-foundation/common_voice_15_0
config: fa
split: None
args: 'config: fa, split: test'
metrics:
- name: Wer
type: wer
value: 19.74175831429967
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper large fa - marziye-A
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the Common Voice 15.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1571
- Wer: 19.7418
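A minimal transcription sketch with the 🤗 Transformers ASR pipeline (the audio file name and generation options are illustrative assumptions, not part of the training setup):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a standard speech-recognition pipeline
asr = pipeline(
    "automatic-speech-recognition",
    model="marziye-A/whisper-large-v3-full-youtube_80hour_7",
)

# Transcribe a Persian audio file (hypothetical path); chunking helps with long inputs
result = asr(
    "sample_fa.wav",
    chunk_length_s=30,
    generate_kwargs={"language": "persian", "task": "transcribe"},
)
print(result["text"])
```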
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.2189 | 0.1567 | 2000 | 0.2248 | 29.0575 |
| 0.1972 | 0.3134 | 4000 | 0.2035 | 25.1376 |
| 0.1906 | 0.4701 | 6000 | 0.1923 | 25.7159 |
| 0.1595 | 0.6268 | 8000 | 0.1806 | 22.4166 |
| 0.1747 | 0.7835 | 10000 | 0.1753 | 23.0041 |
| 0.1744 | 0.9402 | 12000 | 0.1709 | 22.4932 |
| 0.1357 | 1.0969 | 14000 | 0.1687 | 20.7782 |
| 0.1345 | 1.2536 | 16000 | 0.1646 | 21.3221 |
| 0.1362 | 1.4103 | 18000 | 0.1619 | 21.1082 |
| 0.121 | 1.5670 | 20000 | 0.1601 | 20.3781 |
| 0.1354 | 1.7237 | 22000 | 0.1587 | 19.8157 |
| 0.122 | 1.8804 | 24000 | 0.1571 | 19.7418 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
|
Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF
|
Triangle104
| 2024-10-17T08:39:05Z | 19 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:jondurbin/gutenberg-dpo-v0.1",
"dataset:nbeerbower/gutenberg2-dpo",
"base_model:nbeerbower/Mistral-Small-Drummer-22B",
"base_model:quantized:nbeerbower/Mistral-Small-Drummer-22B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T08:22:41Z |
---
base_model: nbeerbower/Mistral-Small-Drummer-22B
datasets:
- jondurbin/gutenberg-dpo-v0.1
- nbeerbower/gutenberg2-dpo
library_name: transformers
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Mistral-Small-Drummer-22B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 63.31
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 40.12
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 16.69
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 12.42
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 9.8
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.39
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Mistral-Small-Drummer-22B
name: Open LLM Leaderboard
---
# Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/Mistral-Small-Drummer-22B`](https://huggingface.co/nbeerbower/Mistral-Small-Drummer-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/Mistral-Small-Drummer-22B) for more details on the model.
---
Model details:
-
mistralai/Mistral-Small-Instruct-2409 finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.
Method
-
ORPO tuned with 2xA40 on RunPod for 1 epoch, with the following hyperparameters:
- learning_rate=4e-6
- lr_scheduler_type="linear"
- beta=0.1
- per_device_train_batch_size=4
- per_device_eval_batch_size=4
- gradient_accumulation_steps=8
- optim="paged_adamw_8bit"
- num_train_epochs=1
Dataset was prepared using Mistral-Small Instruct format.
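For reference, a minimal sketch of how an ORPO run with these settings could be set up using `trl` (the dataset choice, column handling, and trainer arguments here are assumptions rather than the author's actual script, and the exact `trl` API may differ by version):
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "mistralai/Mistral-Small-Instruct-2409"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# gutenberg-dpo provides prompt/chosen/rejected columns for preference tuning
dataset = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

config = ORPOConfig(
    output_dir="mistral-small-drummer-22b",
    learning_rate=4e-6,
    lr_scheduler_type="linear",
    beta=0.1,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=8,
    optim="paged_adamw_8bit",
    num_train_epochs=1,
)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```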
Fine-tune Llama 3 with ORPO
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|:--------------------|------:|
| Avg. | 29.45 |
| IFEval (0-Shot) | 63.31 |
| BBH (3-Shot) | 40.12 |
| MATH Lvl 5 (4-Shot) | 16.69 |
| GPQA (0-shot) | 12.42 |
| MuSR (0-shot) | 9.80 |
| MMLU-PRO (5-shot) | 34.39 |
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF --hf-file mistral-small-drummer-22b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF --hf-file mistral-small-drummer-22b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF --hf-file mistral-small-drummer-22b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Small-Drummer-22B-Q4_K_M-GGUF --hf-file mistral-small-drummer-22b-q4_k_m.gguf -c 2048
```
|
hugging-quants/gemma-2-9b-it-AWQ-INT4
|
hugging-quants
| 2024-10-17T08:31:37Z | 1,287 | 6 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"google",
"autoawq",
"conversational",
"en",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] |
text-generation
| 2024-10-15T15:17:54Z |
---
base_model: google/gemma-2-9b-it
license: gemma
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- gemma2
- google
- autoawq
---
> [!IMPORTANT]
> This repository is a community-driven quantized version of the original model [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) which is the BF16 half-precision official version released by Google.
> [!WARNING]
> This model has been quantized using `transformers` 4.45.0, meaning that the tokenizer available in this repository won't be compatible with lower versions. The same applies to e.g. Text Generation Inference (TGI), which only installs `transformers` 4.45.0 or higher starting in v2.3.1.
## Model Information
Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. They are text-to-text, decoder-only large language models, available in English, with open weights for both pre-trained variants and instruction-tuned variants. Gemma models are well-suited for a variety of text generation tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as a laptop, desktop or your own cloud infrastructure, democratizing access to state of the art AI models and helping foster innovation for everyone.
This repository contains [`google/gemma-2-9b-it`](https://huggingface.co/google/gemma-2-9b-it) quantized using [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) from FP16 down to INT4 using the GEMM kernels performing zero-point quantization with a group size of 128.
## Model Usage
> [!NOTE]
> In order to run the inference with Gemma2 9B Instruct AWQ in INT4, around 6 GiB of VRAM are needed just to load the model checkpoint, not counting the KV cache or the CUDA graphs, so a bit more than that should be available.
In order to use the current quantized model, support is offered for different solutions such as `transformers`, `autoawq`, or `text-generation-inference`.
### 🤗 Transformers
In order to run the inference with Gemma2 9B Instruct AWQ in INT4, you need to install the following packages:
```bash
pip install -q --upgrade "transformers>=4.45.0" accelerate
INSTALL_KERNELS=1 pip install -q git+https://github.com/casper-hansen/AutoAWQ.git@79547665bdb27768a9b392ef375776b020acbf0c
```
To run inference with Gemma2 9B Instruct AWQ in INT4 precision, the AWQ model can be instantiated like any other causal language model via `AutoModelForCausalLM`, and inference then runs as usual.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig
model_id = "hugging-quants/gemma-2-9b-it-AWQ-INT4"
quantization_config = AwqConfig(
bits=4,
fuse_max_seq_len=512, # Note: Update this as per your use-case
do_fuse=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto",
quantization_config=quantization_config
)
prompt = [
{"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
prompt,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```
### AutoAWQ
In order to run the inference with Gemma2 9B Instruct AWQ in INT4, you need to install the following packages:
```bash
pip install -q --upgrade "transformers>=4.45.0" accelerate
INSTALL_KERNELS=1 pip install -q git+https://github.com/casper-hansen/AutoAWQ.git@79547665bdb27768a9b392ef375776b020acbf0c
```
Alternatively, inference can be run via `AutoAWQ`, even though it is built on top of 🤗 `transformers`, which remains the recommended approach described above.
```python
import torch
from awq import AutoAWQForCausalLM
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "hugging-quants/gemma-2-9b-it-AWQ-INT4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoAWQForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
device_map="auto",
)
prompt = [
{"role": "user", "content": "What's Deep Learning?"},
]
inputs = tokenizer.apply_chat_template(
prompt,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True,
).to("cuda")
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=256)
print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:], skip_special_tokens=True)[0])
```
The AutoAWQ script has been adapted from [`AutoAWQ/examples/generate.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py).
### 🤗 Text Generation Inference (TGI)
To run the `text-generation-launcher` with Gemma2 9B Instruct AWQ in INT4 with Marlin kernels for optimized inference speed, you will need to have Docker installed (see [installation notes](https://docs.docker.com/engine/install/)).
Then you just need to run the TGI v2.3.0 (or higher) Docker container as follows:
```bash
docker run --gpus all --shm-size 1g -ti -p 8080:80 \
-v hf_cache:/data \
-e MODEL_ID=hugging-quants/gemma-2-9b-it-AWQ-INT4 \
-e QUANTIZE=awq \
-e MAX_INPUT_LENGTH=4000 \
-e MAX_TOTAL_TOKENS=4096 \
ghcr.io/huggingface/text-generation-inference:2.3.0
```
> [!NOTE]
> TGI will expose different endpoints, to see all the endpoints available check [TGI OpenAPI Specification](https://huggingface.github.io/text-generation-inference/#/).
To send a request to the deployed TGI endpoint, compatible with the [OpenAI OpenAPI specification](https://github.com/openai/openai-openapi), i.e. `/v1/chat/completions`:
```bash
curl 0.0.0.0:8080/v1/chat/completions \
-X POST \
-H 'Content-Type: application/json' \
-d '{
"model": "tgi",
"messages": [
{
"role": "user",
"content": "What is Deep Learning?"
}
],
"max_tokens": 128
}'
```
Or programmatically via the `huggingface_hub` Python client as follows:
```python
import os
from huggingface_hub import InferenceClient
client = InferenceClient(base_url="http://0.0.0.0:8080", api_key="-")
chat_completion = client.chat.completions.create(
model="hugging-quants/gemma-2-9b-it-AWQ-INT4",
messages=[
{"role": "user", "content": "What is Deep Learning?"},
],
max_tokens=128,
)
```
Alternatively, the OpenAI Python client can also be used (see [installation notes](https://github.com/openai/openai-python?tab=readme-ov-file#installation)) as follows:
```python
import os
from openai import OpenAI
client = OpenAI(base_url="http://0.0.0.0:8080/v1", api_key="-")
chat_completion = client.chat.completions.create(
model="tgi",
messages=[
{"role": "user", "content": "What is Deep Learning?"},
],
max_tokens=128,
)
```
### vLLM
To run vLLM with Gemma2 9B Instruct AWQ in INT4, you will need to have Docker installed (see [installation notes](https://docs.docker.com/engine/install/)) and run the latest vLLM Docker container as follows:
```bash
docker run --runtime nvidia --gpus all --ipc=host -p 8000:8000 \
-v hf_cache:/root/.cache/huggingface \
vllm/vllm-openai:latest \
--model hugging-quants/gemma-2-9b-it-AWQ-INT4 \
--max-model-len 4096
```
To send a request to the deployed vLLM endpoint, compatible with the [OpenAI OpenAPI specification](https://github.com/openai/openai-openapi), i.e. `/v1/chat/completions`:
```bash
curl 0.0.0.0:8000/v1/chat/completions \
-X POST \
-H 'Content-Type: application/json' \
-d '{
"model": "hugging-quants/gemma-2-9b-it-AWQ-INT4",
"messages": [
{
"role": "user",
"content": "What is Deep Learning?"
}
],
"max_tokens": 128
}'
```
Or programmatically via the `openai` Python client (see [installation notes](https://github.com/openai/openai-python?tab=readme-ov-file#installation)) as follows:
```python
import os
from openai import OpenAI
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key=os.getenv("VLLM_API_KEY", "-"))
chat_completion = client.chat.completions.create(
model="hugging-quants/gemma-2-9b-it-AWQ-INT4",
messages=[
{"role": "user", "content": "What is Deep Learning?"},
],
max_tokens=128,
)
```
## Quantization Reproduction
> [!IMPORTANT]
> In order to quantize Gemma2 9B Instruct using AutoAWQ, you will need to use an instance with at least enough CPU RAM to fit the whole model i.e. ~20GiB, and an NVIDIA GPU with 16GiB of VRAM to quantize it.
>
> Additionally, you also need to accept the Gemma2 access conditions, as it is a gated model that requires accepting those first.
In order to quantize Gemma2 9B Instruct, first install the following packages:
```bash
pip install -q --upgrade "torch==2.3.0" "transformers>=4.45.0" accelerate
INSTALL_KERNELS=1 pip install -q git+https://github.com/casper-hansen/AutoAWQ.git@79547665bdb27768a9b392ef375776b020acbf0c
```
Then you need to install the `huggingface_hub` Python SDK and login to the Hugging Face Hub.
```bash
pip install -q --upgrade huggingface_hub
huggingface-cli login
```
Then run the following script, adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py):
```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
model_path = "google/gemma-2-9b-it"
quant_path = "hugging-quants/gemma-2-9b-it-AWQ-INT4"
quant_config = {
"zero_point": True,
"q_group_size": 128,
"w_bit": 4,
"version": "GEMM",
}
# Load model
model = AutoAWQForCausalLM.from_pretrained(
model_path, low_cpu_mem_usage=True, use_cache=False,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
# Quantize
model.quantize(tokenizer, quant_config=quant_config)
# Save quantized model
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
print(f'Model is quantized and saved at "{quant_path}"')
```
|
RamsesDIIP/me5-large-construction-adapter-v3
|
RamsesDIIP
| 2024-10-17T08:28:15Z | 7 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4236",
"loss:TripletLoss",
"multilingual",
"arxiv:1908.10084",
"arxiv:1703.07737",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-17T08:26:21Z |
---
base_model: intfloat/multilingual-e5-large
language:
- multilingual
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:4236
- loss:TripletLoss
widget:
- source_sentence: 'query: Hormigonado de muros de contención (CE, EHE), de 3 m de
altura como máximo, con hormigón en masa HM - 35 / B / 20 / XA3 con una cantidad
de cemento de 325 kg/m3 i relación agua cemento =< 0.45 y vertido con cubilote'
sentences:
- 'passage: Caja para interceptor de 80x50 cm, construida con paredes de 30 cm de
espesor de ladrillo hueco, revestida y alisada en su interior con mortero de cemento
1:4, sobre una base de 15 cm de hormigón estructural HM - 20 / B / 20 / X0 con
una dosificación de cemento de 200 kg/m3 y relación agua-cemento <= 0.6, en un
entorno urbano con accesibilidad adecuada, en aceras de más de 3 y hasta 5 m de
ancho o calzada/plataforma única de más de 7 y hasta 12 m de ancho, considerando
la interferencia de servicios o elementos de mobiliario urbano, en intervenciones
de hasta 1 m.'
- 'passage: Hormigonado de muros de soporte (CE, EHE), de 4 m de altura como mínimo,
con hormigón estructural H - 30 / B / 25 / XA2 con una cantidad de cemento de
300 kg/m3 y relación agua cemento =< 0.50 y vertido con bomba de hormigón.'
- 'passage: Colocación de muros de contención de hasta 3 m de altura, utilizando
hormigón en masa HM - 35 / B / 20 / XA3, con una dosificación de cemento de 325
kg/m3 y una relación agua-cemento menor o igual a 0.45, vertido mediante cubilote.'
- source_sentence: 'query: Derivación con ramal a 45° de polietileno, diámetro nominal
DN 400, diámetro ramal DN 400, conexión macho-hembra, de superficies interna lisa
y externa perfilada, de fabricación manipulada según norma UNE-EN 13476-3, apta
para tubo de saneamiento enterrado sin presión de superficies interna lisa y externa
perfilada según norma UNE-EN 13476-3, para unión elástica con anilla elastomérica
de estanqueidad, colocado sobre lecho de arena de 15 cm de espesor, incluído el
relleno del apoyo, con pisón vibrante eléctrico'
sentences:
- 'passage: Tobera ajustable manualmente para montaje en el extremo de un conducto
circular de 160 mm de diámetro de conexión y 80 mm de diámetro de salida, fabricada
en aluminio pintado en color estándar, instalada.'
- 'passage: Derivación con ramal a 30° de polietileno, diámetro nominal DN 500,
diámetro ramal DN 300, conexión soldada, de superficies interna rugosa y externa
lisa, de fabricación estándar según norma UNE-EN 1452-2, apta para tubo de desagüe
superficial sin presión de superficies interna rugosa y externa lisa según norma
UNE-EN 1452-2, para unión rígida con junta de cemento, colocado sobre lecho de
grava de 20 cm de espesor, excluido el relleno del apoyo, con compactador manual.'
- 'passage: Derivación con ramal a 45° de polietileno, diámetro nominal DN 400,
conexión macho-hembra, con superficies internas lisas y externas perfiladas, fabricada
conforme a la norma UNE-EN 13476-3, adecuada para sistemas de saneamiento enterrados
sin presión, unida mediante anilla elastomérica de estanqueidad, instalada sobre
un lecho de arena de 15 cm de espesor, incluyendo el relleno del soporte, utilizando
un pisón vibrante eléctrico.'
- source_sentence: 'query: Pared divisoria para interior de panel de madera contralaminada
de 80 mm de espesor formada por 3 capas de madera de abeto C24, encoladas con
adhesivo sin urea-formaldehído con la disposición transversal de la madera en
las dos caras del panel, sin tratmiento hidrófugo, con acabado superficial tipo
vivienda en las dos caras con madera de abeto rojo barnizado en una cara y con
madera de abeto rojo barnizado en la otra en la otra colocado con fijaciones mecánicas,
desolidarización del soporte con banda resiliente de caucho EPDM extruido, fijada
con grapas; unión entre paneles machihembrado fijados con tornillos de acero y
sellado de la cara interior de los juntas con cinta adhesiva de goma butílica,
con armadura de poliéster y sellado de la cara exterior con cinta autoadhesiva
de polietileno con adhesivo acrílico sin disolventes, con armadura de polietileno
y película de separación de papel siliconado, previa aplicación de imprimación
incolora a base de una dispersión acrílica sin disolventes; resolución de trabas
con tornillos de acero; fijación de paneles con elementos de acero galvanizado'
sentences:
- 'passage: Pared divisoria para interior de panel de yeso laminado de 100 mm de
espesor formada por 2 capas de yeso, encoladas con adhesivo a base de agua, con
la disposición vertical del yeso en las dos caras del panel, con tratamiento hidrófugo,
con acabado superficial tipo industrial en las dos caras con pintura acrílica
en una cara y con pintura epóxica en la otra, colocado con fijaciones químicas,
desolidarización del soporte con banda de espuma de poliuretano, fijada con adhesivo;
unión entre paneles con sistema de encastre fijados con anclajes de plástico y
sellado de la cara interior de las juntas con masilla acrílica, con refuerzo de
fibra de vidrio y sellado de la cara exterior con cinta autoadhesiva de aluminio,
con armadura de aluminio y película de separación de papel kraft, previa aplicación
de imprimación colorida a base de una dispersión acrílica; resolución de trabas
con anclajes de plástico; fijación de paneles con elementos de acero inoxidable.'
- 'passage: Instalación de un sistema de dos ascensores en configuración de descenso
combinado, sin sala de máquinas, cada uno equipado con un sistema de tracción
directa y un perfil de aceleración y desaceleración suave, operando a una velocidad
de 1 m/s, diseñado para un uso moderado, con capacidad para 6 personas (carga
máxima de 480 kg), 11 paradas (recorrido total de 30 m), cabina de calidad estándar
con dimensiones de 1250x1000 mm, acceso doble a 90º con puertas automáticas de
tres hojas de acero inoxidable de 800x2000 mm, y puertas de acceso automáticas
de tres hojas pintadas de calidad estándar de 800x2000 mm, cumpliendo con la normativa
CE según el REAL DECRETO 203/2016.'
- 'passage: Pared interior de panel de madera contralaminada de 80 mm de grosor
compuesta por tres capas de madera de abeto C24, unidas con adhesivo libre de
urea-formaldehído, con la disposición de la madera en sentido transversal en ambas
caras, sin tratamiento hidrófugo, y con un acabado de vivienda en ambas caras
utilizando madera de abeto rojo barnizada, instalada con fijaciones mecánicas
y desolidarización del soporte mediante banda resiliente de caucho EPDM, fijada
con grapas; unión entre paneles mediante machihembrado y tornillos de acero, sellando
las juntas interiores con cinta de goma butílica y la cara exterior con cinta
autoadhesiva de polietileno con adhesivo acrílico sin disolventes, además de aplicar
una imprimación acrílica incolora antes de la instalación; resolución de trabas
con tornillos de acero y fijación de paneles con elementos de acero galvanizado.'
- source_sentence: 'query: Pavimento de losa de hormigón para pavimentos de 60x40
cm y 6 cm de espesor, de forma rectangular, textura pétrea lisa, precio superior,
colocados con mortero de cemento 1:6, en entorno urbano con dificultad de mobilidad,
en aceras <= 3 m de ancho o calzada/plataforma única <= 7 m de ancho, sin afectación
por servicios o elementos de mobiliario urbano, en actuaciones de 1 a 10 m2'
sentences:
- 'passage: Losas de concreto de 60x40 cm y 6 cm de grosor, con acabado liso y textura
pétrea, instaladas con mortero de cemento en proporción 1:6, adecuadas para áreas
urbanas con acceso limitado, en aceras de hasta 3 m de ancho o plataformas de
hasta 7 m de ancho, sin interferencias de servicios o mobiliario urbano, en proyectos
de entre 1 y 10 m2.'
- 'passage: Pavimento de losa de cerámica para pavimentos de 60x40 cm y 6 cm de
espesor, de forma cuadrada, textura rugosa, precio inferior, colocados con adhesivo
especial 1:4, en entorno rural con fácil acceso, en caminos <= 3 m de ancho o
sendero/plataforma única <= 7 m de ancho, con afectación por servicios o elementos
de jardinería, en actuaciones de 1 a 20 m2.'
- 'passage: Vertido de dinteles utilizando hormigón autocompactante con aditivo
hidrófugo HP - 40 / AC / 10 / XC2, con una dosificación de cemento de 350 kg/m3
y una relación agua-cemento menor o igual a 0.45, realizado con cubilote.'
- source_sentence: 'query: Arco circular estructural a sardinel, de espesor 24 cm
y 24 cm de anchura, de ladrillo perforado R-10, de 240x115x70 mm, para revestir,
categoría I, HD, según la norma UNE-EN 771-1, colocado con mortero cemento 1:3'
sentences:
- 'passage: Ventana de aluminio anodizado en acabado natural, instalada sobre un
premarco, con dos hojas deslizantes y una hoja fija lateral o central, diseñada
para un hueco de obra de aproximadamente 180x120 cm, fabricada con perfiles de
alta calidad, con una clasificación mínima de 3 en permeabilidad al aire según
UNE-EN 12207, clasificación mínima 7A en estanqueidad al agua según UNE-EN 12208
y clasificación mínima C3 en resistencia al viento según UNE-EN 12210, sin sistema
de persiana.'
- 'passage: Arco estructural de forma circular, con un espesor de 24 cm y una anchura
de 24 cm, fabricado con ladrillo perforado R-10 de dimensiones 240x115x70 mm,
destinado a revestimiento, categoría I, HD, conforme a la norma UNE-EN 771-1,
instalado con mortero de cemento en proporción 1:3.'
- 'passage: Arco semicircular decorativo a pie de muro, de espesor 30 cm y 20 cm
de anchura, de ladrillo macizo R-15, de 300x150x75 mm, para acabado, categoría
II, LD, según la norma UNE-EN 771-2, colocado con mortero cal 1:4.'
model-index:
- name: Multilingual E5 Large with Linear Adapter for Construction Terms
results:
- task:
type: triplet
name: Triplet
dataset:
name: validation set
type: validation-set
metrics:
- type: cosine_accuracy
value: 0.996219281663516
name: Cosine Accuracy
- type: dot_accuracy
value: 0.003780718336483932
name: Dot Accuracy
- type: manhattan_accuracy
value: 0.996219281663516
name: Manhattan Accuracy
- type: euclidean_accuracy
value: 0.996219281663516
name: Euclidean Accuracy
- type: max_accuracy
value: 0.996219281663516
name: Max Accuracy
---
# Multilingual E5 Large with Linear Adapter for Construction Terms
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) <!-- at revision ab10c1a7f42e74530fe7ae5be82e6d4f11a719eb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
- **Language:** multilingual
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
(linear_adapter): MyLinearAdapter(
(linear): Linear(in_features=1024, out_features=1024, bias=True)
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("RamsesDIIP/me5-large-construction-adapter-v3")
# Run inference
sentences = [
'query: Arco circular estructural a sardinel, de espesor 24 cm y 24 cm de anchura, de ladrillo perforado R-10, de 240x115x70 mm, para revestir, categoría I, HD, según la norma UNE-EN 771-1, colocado con mortero cemento 1:3',
'passage: Arco estructural de forma circular, con un espesor de 24 cm y una anchura de 24 cm, fabricado con ladrillo perforado R-10 de dimensiones 240x115x70 mm, destinado a revestimiento, categoría I, HD, conforme a la norma UNE-EN 771-1, instalado con mortero de cemento en proporción 1:3.',
'passage: Arco semicircular decorativo a pie de muro, de espesor 30 cm y 20 cm de anchura, de ladrillo macizo R-15, de 300x150x75 mm, para acabado, categoría II, LD, según la norma UNE-EN 771-2, colocado con mortero cal 1:4.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Dataset: `validation-set`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | Value |
|:-------------------|:-----------|
| cosine_accuracy | 0.9962 |
| dot_accuracy | 0.0038 |
| manhattan_accuracy | 0.9962 |
| euclidean_accuracy | 0.9962 |
| **max_accuracy** | **0.9962** |
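The same evaluation can be reproduced on your own triplets. A minimal sketch, assuming you supply your own anchor/positive/negative strings with the same `query: `/`passage: ` prefixes used during training (the placeholder triplets below are only illustrative):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("RamsesDIIP/me5-large-construction-adapter-v3")

# Hypothetical triplets; replace with your own construction descriptions
anchors = ["query: Arco circular estructural a sardinel, de espesor 24 cm ..."]
positives = ["passage: Arco estructural de forma circular, con un espesor de 24 cm ..."]
negatives = ["passage: Arco semicircular decorativo a pie de muro ..."]

evaluator = TripletEvaluator(
    anchors=anchors,
    positives=positives,
    negatives=negatives,
    name="validation-set",
)
print(evaluator(model))  # e.g. {'validation-set_cosine_accuracy': ...}
```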
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 4,236 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 108.53 tokens</li><li>max: 320 tokens</li></ul> | <ul><li>min: 24 tokens</li><li>mean: 105.47 tokens</li><li>max: 287 tokens</li></ul> | <ul><li>min: 21 tokens</li><li>mean: 97.96 tokens</li><li>max: 304 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Losa aligerada de hormigón armado hormigón para armar con aditivo hidrófugo HA - 30 / F / 10 / XC2 con una cantidad de cemento de 300 kg/m3 i relación agua cemento =< 0.6 de 40 cm de canto, con capa superior e inferior de hormigón de 7.5/7,5 cm, armado de la capa inferior con malla electrosoldada de barras corrugadas de acero B500SD, ME 20x20 cm D:8-8 mm, y armado de la capa superior con malla electrosoldada de barras corrugadas de acero B500SD, ME 20x20 cm D:6-6 mm, con una cuantía de 0,45 m2/m2 de aligeradores de poliestireno expandido, interejes de 120 cm, nervios de 40 cm de espesor, armados con 30 kg/m2 de acero en barras corrugadas, utilizando encofrado para dejar el hormigón visto de <= 5 m de altura, hormigonado con bomba</code> | <code>passage: Losa aligerada de concreto reforzado con aditivo impermeabilizante HA - 30 / F / 10 / XC2, con una dosificación de cemento de 300 kg/m3 y una relación agua-cemento menor o igual a 0.6, de 40 cm de espesor, con capas superior e inferior de concreto de 7.5 cm cada una, reforzada en la capa inferior con malla electrosoldada de acero B500SD, ME 20x20 cm D:8-8 mm, y en la capa superior con malla electrosoldada de acero B500SD, ME 20x20 cm D:6-6 mm, incorporando 0,45 m2/m2 de poliestireno expandido como aligerante, con nervios de 40 cm de grosor, reforzados con 30 kg/m2 de acero en varillas corrugadas, utilizando encofrado para un acabado de hormigón expuesto de hasta 5 m de altura, vertido con bomba.</code> | <code>passage: Losa de concreto convencional con aditivo retardante para fraguado, con una cantidad de cemento de 350 kg/m3 y relación agua-cemento =< 0.5 de 30 cm de canto, con capa superior e inferior de concreto de 10/10 cm, armado de la capa inferior con malla de alambre de acero B500S, ME 15x15 cm D:10-10 mm, y armado de la capa superior con malla de alambre de acero B500S, ME 15x15 cm D:8-8 mm, con una cuantía de 0,50 m2/m2 de aligeradores de poliestireno extruido, interejes de 100 cm, nervios de 30 cm de espesor, armados con 25 kg/m2 de acero en barras lisas, utilizando encofrado para dejar el concreto cubierto de <= 4 m de altura, hormigonado manualmente.</code> |
| <code>query: Base de hormigón (CE, EHE) hormigón HM-20/S / 40 / I de consistencia seca, tamaño máximo del árido 40 mm, con >= 200 kg/m3 de cemento, apto para clase de exposición I, vertido con transporte interior mecánico con extendido y vibrado manual, con acabado maestreado, en entorno urbano sin dificultad de movilidad, en aceras > 3 y <= 5 m de ancho o calzada/plataforma única > 7 y <= 12 m de ancho, con afectación por servicios o elementos de mobiliario urbano, en actuaciones de 0.2 a 2 m3, con dúmper de gasoil</code> | <code>passage: Base de concreto (CE, EHE) concreto HM-20/S / 40 / I de consistencia seca, con un tamaño máximo de agregado de 40 mm, conteniendo >= 200 kg/m3 de cemento, adecuado para clase de exposición I, vertido mediante transporte mecánico interno, extendido y vibrado manual, con acabado nivelado, en un entorno urbano con movilidad accesible, en aceras de más de 3 y hasta 5 m de ancho o calzada/plataforma única de más de 7 y hasta 12 m de ancho, afectado por servicios o elementos de mobiliario urbano, en trabajos de 0.2 a 2 m3, utilizando un camión de gasóleo.</code> | <code>passage: Base de hormigón (CE, EHE) hormigón HM-25/S / 50 / II de consistencia fluida, tamaño máximo del árido 30 mm, con >= 250 kg/m3 de cemento, apto para clase de exposición II, vertido con transporte interior manual con extendido y vibrado mecánico, con acabado rugoso, en entorno rural con dificultad de movilidad, en aceras > 2 y <= 4 m de ancho o calzada/plataforma única > 6 y <= 10 m de ancho, sin afectación por servicios o elementos de mobiliario urbano, en actuaciones de 1 a 3 m3, con camión de gasóleo.</code> |
| <code>query: Pavimento de losa de hormigón para pavimentos de 80x80 cm y 3,5 cm de espesor, de forma cuadrado, textura abujardada, precio alto, sobre lecho de arena de 3 cm de espesor, con relleno de juntas con arena fina y compactación del pavimento acabado, en entorno urbano sin dificultad de movilidad, en aceras <= 3 m de ancho o calzada/plataforma única <= 7 m de ancho, sin afectación por servicios o elementos de mobiliario urbano, en actuaciones de hasta 1 m2</code> | <code>passage: Losas de concreto de 80x80 cm y 3,5 cm de grosor, con acabado abujardado, instaladas sobre una base de arena de 3 cm, con juntas rellenadas con arena fina y compactación final, adecuadas para áreas urbanas con accesibilidad, en aceras de hasta 3 m de ancho o plataformas de hasta 7 m, sin interferencias de servicios públicos o mobiliario urbano, en proyectos de hasta 1 m2.</code> | <code>passage: Pavimento de losa de cerámica para pavimentos de 60x60 cm y 2 cm de espesor, de forma rectangular, textura lisa, precio moderado, sobre base de grava de 2 cm de espesor, con relleno de juntas con mortero y nivelación del pavimento terminado, en entorno rural con dificultad de acceso, en caminos <= 2 m de ancho o senderos/plataformas múltiples <= 5 m de ancho, con afectación por servicios o elementos de jardinería, en actuaciones de hasta 2 m2.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 3
}
```
### Evaluation Dataset
#### Unnamed Dataset
* Size: 529 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
* Approximate statistics based on the first 529 samples:
| | anchor | positive | negative |
|:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 28 tokens</li><li>mean: 109.1 tokens</li><li>max: 320 tokens</li></ul> | <ul><li>min: 29 tokens</li><li>mean: 105.07 tokens</li><li>max: 284 tokens</li></ul> | <ul><li>min: 28 tokens</li><li>mean: 98.9 tokens</li><li>max: 303 tokens</li></ul> |
* Samples:
| anchor | positive | negative |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>query: Canal de hormigón polímero sin pendiente, de ancho interior 100 mm y de 160 a 200 mm de altura, con perfil lateral, con rejilla de acero inoxidable nervada clase A15, según norma UNE-EN 1433, fijada con cancela al canal, colocado sobre base de hormigón con solera de 100 mm de espesor y paredes de 100 mm de espesor</code> | <code>passage: Canal de polímero de hormigón plano, con un ancho interno de 100 mm y una altura que varía entre 160 y 200 mm, equipado con un perfil lateral y una rejilla de acero inoxidable clase A15, conforme a la norma UNE-EN 1433, anclada al canal y asentada sobre una base de hormigón con una solera de 100 mm de grosor y paredes de 100 mm de grosor.</code> | <code>passage: Canal de hormigón convencional con pendiente, de ancho interior 150 mm y de 200 a 250 mm de altura, sin perfil lateral, con rejilla de plástico clase B125, según norma UNE-EN 1433, fijada sin cancela al canal, colocado sobre base de asfalto con solera de 150 mm de espesor y paredes de 150 mm de espesor.</code> |
| <code>query: Forjado nervado reticular de 35+5 cm, de casetones mortero de cemento con una cuantía de 0,61/m2 de forjado, interejes 0,8 m, con una cuantía de 24 kg/m2 de armadura AP500 S de acero en barras corrugadas, armadura AP500 T en mallas electrosoldadas de 15x15 cm, 5 y 5 mm de diámetro y 0,187 1/m2 de hormigón para armar HA - 35 / F / 20 / XC1 con una cantidad de cemento de 300 kg/m3 i relación agua cemento =< 0.6 vertido con bomba</code> | <code>passage: Forjado reticulado nervado de 35+5 cm, compuesto por casetones de mortero de cemento, con una densidad de 0,61/m2, separaciones de 0,8 m, y una cantidad de 24 kg/m2 de armadura AP500 S en varillas corrugadas, además de armadura AP500 T en mallas electrosoldadas de 15x15 cm, 5 y 5 mm de diámetro, y 0,187 1/m2 de hormigón HA - 35 / F / 20 / XC1, utilizando 300 kg/m3 de cemento y una relación agua-cemento menor o igual a 0.6, aplicado mediante bomba.</code> | <code>passage: Forjado plano de 30+10 cm, de casetones de poliestireno expandido con una cuantía de 0,75/m2 de forjado, interejes 1,0 m, con una cuantía de 30 kg/m2 de armadura B500S de acero en mallas electrosoldadas de 20x20 cm, 6 y 6 mm de diámetro y 0,150 1/m2 de hormigón para armar HA - 25 / F / 30 / XC2 con una cantidad de cemento de 350 kg/m3 y relación agua cemento => 0.5 vertido manualmente.</code> |
| <code>query: Ventana de aluminio lacado blanco, colocada sobre premarco, con tres hojas correderas sobre dos raíles, para un hueco de obra aproximado de 210x150 cm, elaborada con perfiles de precio alto, clasificación mínima 3 de permeabilidad al aire según UNE-EN 12207, clasificación mínima 7A de estanqueidad al agua según UNE-EN 12208 y clasificación mínima C3 de resistencia al viento según UNE-EN 12210, sin persiana</code> | <code>passage: Ventana de PVC blanco, instalada en un premarco, con tres paneles deslizantes sobre dos rieles, diseñada para un espacio de obra de aproximadamente 210x150 cm, fabricada con perfiles de alta calidad, con una clasificación mínima de 3 en permeabilidad al aire según UNE-EN 12207, clasificación mínima 7A en estanqueidad al agua según UNE-EN 12208 y clasificación mínima C3 en resistencia al viento según UNE-EN 12210, sin sistema de persiana.</code> | <code>passage: Ventana de PVC sin lacar, instalada en un marco fijo, con dos hojas abatibles sobre un solo raíl, para un hueco de obra aproximado de 200x140 cm, elaborada con perfiles de precio bajo, clasificación mínima 1 de permeabilidad al aire según UNE-EN 12207, clasificación mínima 5A de estanqueidad al agua según UNE-EN 12208 y clasificación mínima B2 de resistencia al viento según UNE-EN 12210, con persiana incorporada.</code> |
* Loss: [<code>TripletLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#tripletloss) with these parameters:
```json
{
"distance_metric": "TripletDistanceMetric.EUCLIDEAN",
"triplet_margin": 3
}
```
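For reference, here is a minimal fine-tuning sketch that reproduces the loss configuration above (Euclidean distance, margin 3) together with the key non-default hyperparameters listed below. It omits the custom linear adapter shown in the architecture section, and `train_dataset` is a hypothetical `datasets.Dataset` with `anchor`, `positive`, and `negative` string columns, as in the sample tables:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import TripletLoss, TripletDistanceMetric
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("intfloat/multilingual-e5-large")

# Hypothetical placeholder data; use your own "query: ..." / "passage: ..." triplets
train_dataset = Dataset.from_dict({
    "anchor": ["query: ..."],
    "positive": ["passage: ..."],
    "negative": ["passage: ..."],
})

loss = TripletLoss(
    model=model,
    distance_metric=TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=3,
)

args = SentenceTransformerTrainingArguments(
    output_dir="me5-construction-adapter",
    num_train_epochs=10,
    per_device_train_batch_size=6,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=train_dataset, loss=loss
)
trainer.train()
```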
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `learning_rate`: 2e-05
- `num_train_epochs`: 10
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 6
- `per_device_eval_batch_size`: 6
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | validation-set_max_accuracy |
|:----------:|:--------:|:-------------:|:---------------:|:---------------------------:|
| 0.2833 | 200 | 2.8113 | 2.3674 | 0.9962 |
| 0.5666 | 400 | 1.6305 | 0.7209 | 0.9887 |
| 0.8499 | 600 | 0.7763 | 0.6523 | 0.9792 |
| 1.1331 | 800 | 0.7287 | 0.5315 | 0.9849 |
| 1.4164 | 1000 | 0.5824 | 0.4461 | 0.9849 |
| 1.6997 | 1200 | 0.508 | 0.4173 | 0.9905 |
| 1.9830 | 1400 | 0.4784 | 0.3315 | 0.9905 |
| 2.2663 | 1600 | 0.2979 | 0.2590 | 0.9868 |
| 2.5496 | 1800 | 0.2218 | 0.2567 | 0.9868 |
| 2.8329 | 2000 | 0.2886 | 0.1700 | 0.9887 |
| 3.1161 | 2200 | 0.2331 | 0.1453 | 0.9887 |
| 3.3994 | 2400 | 0.1352 | 0.1226 | 0.9962 |
| 3.6827 | 2600 | 0.16 | 0.1649 | 0.9887 |
| 3.9660 | 2800 | 0.1549 | 0.1291 | 0.9962 |
| 4.2493 | 3000 | 0.088 | 0.1059 | 0.9962 |
| 4.5326 | 3200 | 0.0908 | 0.0973 | 0.9962 |
| 4.8159 | 3400 | 0.0784 | 0.0907 | 0.9962 |
| 5.0992 | 3600 | 0.0858 | 0.1177 | 0.9962 |
| 5.3824 | 3800 | 0.0559 | 0.0898 | 0.9962 |
| 5.6657 | 4000 | 0.0558 | 0.0715 | 0.9962 |
| 5.9490 | 4200 | 0.038 | 0.0621 | 0.9905 |
| 6.2323 | 4400 | 0.0322 | 0.0639 | 0.9981 |
| 6.5156 | 4600 | 0.0189 | 0.0804 | 0.9943 |
| 6.7989 | 4800 | 0.0322 | 0.0572 | 0.9887 |
| 7.0822 | 5000 | 0.0234 | 0.0468 | 0.9962 |
| **7.3654** | **5200** | **0.0109** | **0.0393** | **0.9962** |
| 7.6487 | 5400 | 0.0089 | 0.0423 | 0.9962 |
| 7.9320 | 5600 | 0.0109 | 0.0452 | 0.9962 |
| 8.2153 | 5800 | 0.0142 | 0.0453 | 0.9962 |
| 8.4986 | 6000 | 0.0087 | 0.0482 | 0.9943 |
| 8.7819 | 6200 | 0.0016 | 0.0482 | 0.9943 |
| 9.0652 | 6400 | 0.0022 | 0.0442 | 0.9943 |
| 9.3484 | 6600 | 0.0067 | 0.0449 | 0.9924 |
| 9.6317 | 6800 | 0.0008 | 0.0448 | 0.9943 |
| 9.9150 | 7000 | 0.0025 | 0.0455 | 0.9962 |
| 10.0 | 7060 | - | - | 0.9962 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.2.0
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### TripletLoss
```bibtex
@misc{hermans2017defense,
title={In Defense of the Triplet Loss for Person Re-Identification},
author={Alexander Hermans and Lucas Beyer and Bastian Leibe},
year={2017},
eprint={1703.07737},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
robotics-diffusion-transformer/rdt-1b
|
robotics-diffusion-transformer
| 2024-10-17T08:27:05Z | 2,435 | 72 |
transformers
|
[
"transformers",
"pytorch",
"robotics",
"multimodal",
"pretraining",
"vla",
"diffusion",
"rdt",
"en",
"arxiv:2410.07864",
"license:mit",
"endpoints_compatible",
"region:us"
] |
robotics
| 2024-08-27T05:32:41Z |
---
license: mit
language:
- en
pipeline_tag: robotics
library_name: transformers
tags:
- robotics
- pytorch
- multimodal
- pretraining
- vla
- diffusion
- rdt
---
# RDT-1B

RDT-1B is a 1B-parameter imitation-learning Diffusion Transformer pre-trained on 1M+ multi-robot episodes. Given a language instruction and RGB images from up to three views, RDT predicts the next 64 robot actions. RDT is compatible with almost all modern mobile manipulators, from single-arm to dual-arm, joint to EEF, position to velocity, and even wheeled locomotion.
All the [code](https://github.com/thu-ml/RoboticsDiffusionTransformer/tree/main?tab=readme-ov-file), pre-trained model weights, and [data](https://huggingface.co/datasets/robotics-diffusion-transformer/rdt-ft-data) are licensed under the MIT license.
Please refer to our [project page](https://rdt-robotics.github.io/rdt-robotics/) and [paper](https://arxiv.org/pdf/2410.07864) for more information.
## Model Details
- **Developed by:** The RDT team consisting of researchers from the [TSAIL group](https://ml.cs.tsinghua.edu.cn/) at Tsinghua University
- **Task Type:** Vision-Language-Action (language, image => robot actions)
- **Model Type:** Diffusion Policy with Transformers
- **License:** MIT
- **Language(s) (NLP):** en
- **Multi-Modal Encoders:**
- **Vision Backbone:** [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384)
- **Language Model:** [t5-v1_1-xxl](https://huggingface.co/google/t5-v1_1-xxl)
- **Pre-Training Datasets:** 46 datasets consisting of [RT-1 Dataset](https://robotics-transformer1.github.io/), [RH20T](https://rh20t.github.io/), [DROID](https://droid-dataset.github.io/), [BridgeData V2](https://rail-berkeley.github.io/bridgedata/), [RoboSet](https://robopen.github.io/roboset/), and a subset of [Open X-Embodiment](https://robotics-transformer-x.github.io/). See [this link](https://github.com/thu-ml/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md#download-and-prepare-datasets) for a detailed list.
- **Repository:** https://github.com/thu-ml/RoboticsDiffusionTransformer
- **Paper :** https://arxiv.org/pdf/2410.07864
- **Project Page:** https://rdt-robotics.github.io/rdt-robotics/
## Uses
RDT takes a language instruction, RGB images (up to three views), the control frequency (if any), and proprioception as input, and predicts the next 64 robot actions.
RDT supports control of almost all robot manipulators through a unified action space that covers the main physical quantities of a manipulator (e.g., end-effector and joint, position and velocity, and wheeled locomotion).
To deploy on your robot platform, fill the relevant quantities of the raw action vector into the unified-space vector. See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for more information.
**Out-of-Scope**: Due to the embodiment gap, RDT cannot yet generalize to robot platforms not seen in the pre-training datasets.
In that case, we recommend collecting a small dataset on the target robot and fine-tuning RDT on it.
See [our repository](https://github.com/thu-ml/RoboticsDiffusionTransformer) for a tutorial.
Here's an example of how to use the RDT-1B model for inference on a robot:
```python
# Please first clone the repository and install dependencies
# Then switch to the root directory of the repository by "cd RoboticsDiffusionTransformer"
from typing import List

import torch
from PIL import Image

# Import a create function from the code base
from scripts.agilex_model import create_model

# Names of cameras used for visual input
CAMERA_NAMES = ['cam_high', 'cam_right_wrist', 'cam_left_wrist']
config = {
    'episode_len': 1000,  # Max length of one episode
    'state_dim': 14,      # Dimension of the robot's state
    'chunk_size': 64,     # Number of actions to predict in one step
    'camera_names': CAMERA_NAMES,
}
pretrained_vision_encoder_name_or_path = "google/siglip-so400m-patch14-384"
# Create the model with the specified configuration
model = create_model(
    args=config,
    dtype=torch.bfloat16,
    pretrained_vision_encoder_name_or_path=pretrained_vision_encoder_name_or_path,
    pretrained='robotics-diffusion-transformer/rdt-1b',
    control_frequency=25,
)
# Start the inference process
# Load the pre-computed language embeddings
# Refer to scripts/encode_lang.py for how to encode the language instruction
lang_embeddings_path = 'your/language/embedding/path'
text_embedding = torch.load(lang_embeddings_path)['embeddings']
images: List[Image.Image] = ...  # The images from the last 2 frames
proprio = ...  # The current robot state
# Perform inference to predict the next `chunk_size` actions
actions = model.step(
    proprio=proprio,
    images=images,
    text_embeds=text_embedding
)
```
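The language embedding loaded above must be pre-computed; `scripts/encode_lang.py` in the repository is the canonical way to do this. As an illustrative sketch only (the instruction text and output path below are hypothetical), encoding with the T5 encoder listed in the model details might look like:

```python
import torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("google/t5-v1_1-xxl")
encoder = T5EncoderModel.from_pretrained("google/t5-v1_1-xxl", torch_dtype=torch.bfloat16)

instruction = "Pick up the red block and place it into the box."
inputs = tokenizer(instruction, return_tensors="pt")
with torch.no_grad():
    embeddings = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)

# Save in the same {"embeddings": ...} format loaded by the inference example above
torch.save({"embeddings": embeddings}, "lang_embed.pt")
```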
<!-- RDT-1B supports finetuning on custom datasets, deploying and inferencing on real robots, and retraining the model.
Please refer to [our repository](https://github.com/GeneralEmbodiedSystem/RoboticsDiffusionTransformer/blob/main/docs/pretrain.md) for all the above guides. -->
## Citation
If you find our work helpful, please cite us:
```bibtex
@article{liu2024rdt,
title={RDT-1B: a Diffusion Foundation Model for Bimanual Manipulation},
author={Liu, Songming and Wu, Lingxuan and Li, Bangguo and Tan, Hengkai and Chen, Huayu and Wang, Zhengyi and Xu, Ke and Su, Hang and Zhu, Jun},
journal={arXiv preprint arXiv:2410.07864},
year={2024}
}
```
Thank you!
|
mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF
|
mradermacher
| 2024-10-17T08:16:11Z | 33 | 2 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lemon07r/Gemma-2-Ataraxy-v4b-9B",
"base_model:quantized:lemon07r/Gemma-2-Ataraxy-v4b-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:33:38Z |
---
base_model: lemon07r/Gemma-2-Ataraxy-v4b-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4b-9B
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
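As a quick start, here is a minimal sketch of loading one of the quants below with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (`pip install llama-cpp-python`), assuming the Q4_K_M file has already been downloaded to the working directory:

```python
from llama_cpp import Llama

# Load the GGUF quant and run a short completion
llm = Llama(model_path="Gemma-2-Ataraxy-v4b-9B.Q4_K_M.gguf", n_ctx=4096)
output = llm("Write a short poem about autumn.", max_tokens=128)
print(output["choices"][0]["text"])
```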
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF
|
mradermacher
| 2024-10-17T08:16:08Z | 17 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:lemon07r/Gemma-2-Ataraxy-v4b-9B",
"base_model:quantized:lemon07r/Gemma-2-Ataraxy-v4b-9B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T06:50:20Z |
---
base_model: lemon07r/Gemma-2-Ataraxy-v4b-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4b-9B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
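As a quick start, here is a minimal sketch of fetching a single quant from this repository with `huggingface_hub` before loading it in the GGUF runtime of your choice:

```python
from huggingface_hub import hf_hub_download

# Download one quant file from this repo to the local Hugging Face cache
path = hf_hub_download(
    repo_id="mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF",
    filename="Gemma-2-Ataraxy-v4b-9B.i1-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded quant
```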
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.6 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q2_K.gguf) | i1-Q2_K | 3.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 5.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 5.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_0.gguf) | i1-Q4_0 | 5.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 5.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-v4b-9B-i1-GGUF/resolve/main/Gemma-2-Ataraxy-v4b-9B.i1-Q6_K.gguf) | i1-Q6_K | 7.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
enginia/tiny_fsdp_dbc_171024_1
|
enginia
| 2024-10-17T08:15:40Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T08:13:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serione/opt-125m-9
|
Serione
| 2024-10-17T08:12:54Z | 156 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T08:12:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serione/opt-125m-8
|
Serione
| 2024-10-17T08:09:11Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T08:08:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
p1xelsr/no_wtm_1m_dedup1
|
p1xelsr
| 2024-10-17T08:07:26Z | 87 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T08:05:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serione/opt-125m-7
|
Serione
| 2024-10-17T08:04:19Z | 157 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T08:03:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
avemio-digital/GRAG-BGE-M3-Pairs-Triples-Hessian-AI
|
avemio-digital
| 2024-10-17T08:03:05Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dataset:avemio-digital/GRAG-Embedding-Triples-Hessian-AI",
"base_model:avemio-digital/GRAG-BGE-M3-Pairs-Hessian-AI",
"base_model:finetune:avemio-digital/GRAG-BGE-M3-Pairs-Hessian-AI",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2024-10-17T07:24:05Z |
---
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
base_model: avemio-digital/GRAG-BGE-M3-Pairs-Hessian-AI
widget:
- source_sentence: 'search_query: i love autotrain'
sentences:
- 'search_query: huggingface auto train'
- 'search_query: hugging face auto train'
- 'search_query: i love autotrain'
pipeline_tag: sentence-similarity
datasets:
- avemio-digital/GRAG-Embedding-Triples-Hessian-AI
---
# Model Trained Using AutoTrain
- Problem type: Sentence Transformers
## Validation Metrics
No validation metrics available
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the Hugging Face Hub
model = SentenceTransformer("avemio-digital/GRAG-BGE-M3-Pairs-Triples-Hessian-AI")
# Run inference
sentences = [
'search_query: autotrain',
'search_query: auto train',
'search_query: i love autotrain',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
```
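Since the card lists a triples dataset as training data, here is a minimal, hedged sketch of how (anchor, positive, negative) triples are commonly used to fine-tune a Sentence Transformers model. The loss choice, column handling and hyperparameters are assumptions for illustration, not the authors' AutoTrain configuration.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Start from the pairs model this repo was fine-tuned from.
model = SentenceTransformer("avemio-digital/GRAG-BGE-M3-Pairs-Hessian-AI")

# Toy triples; in practice these would come from
# avemio-digital/GRAG-Embedding-Triples-Hessian-AI (column names assumed).
train_examples = [
    InputExample(texts=["search_query: autotrain",
                        "search_query: auto train",
                        "search_query: unrelated text"]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# MultipleNegativesRankingLoss accepts (anchor, positive, negative) triples.
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```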
|
mateiaassAI/teacher_sst2_redv2
|
mateiaassAI
| 2024-10-17T08:01:38Z | 78 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:mateiaassAI/teacher_sst2",
"base_model:finetune:mateiaassAI/teacher_sst2",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T17:07:54Z |
---
library_name: transformers
license: mit
base_model: mateiaassAI/teacher_sst2
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
- precision
- recall
model-index:
- name: teacher_sst2_redv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher_sst2_redv2
This model is a fine-tuned version of [mateiaassAI/teacher_sst2](https://huggingface.co/mateiaassAI/teacher_sst2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2390
- F1: 0.6897
- Roc Auc: 0.7984
- Accuracy: 0.5893
- Precision: 0.7488
- Recall: 0.6476
## Model description
More information needed
## Intended uses & limitations
More information needed
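As a rough illustration of intended use, here is a minimal, hedged inference sketch with 🤗 Transformers; the example text is a placeholder, and the label names/thresholds should be checked against the checkpoint's `id2label` config.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint directly from the Hub.
clf = pipeline("text-classification", model="mateiaassAI/teacher_sst2_redv2", top_k=None)

# Returns a score per label, so you can apply your own decision threshold
# (the reported F1/ROC AUC metrics suggest a multi-label setup).
print(clf("This movie was surprisingly good."))
```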
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
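For illustration, the hyperparameters listed above roughly correspond to the following 🤗 `TrainingArguments`; this is a hedged sketch, and the output directory and any unlisted options are assumptions.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything else is left at its default.
training_args = TrainingArguments(
    output_dir="teacher_sst2_redv2",   # assumed output directory
    learning_rate=1.7e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=4,
    lr_scheduler_type="linear",        # Adam(betas=(0.9, 0.999), eps=1e-8) is the default optimizer
)
```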
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:---------:|:------:|
| No log | 1.0 | 256 | 0.2659 | 0.5655 | 0.7202 | 0.4530 | 0.8249 | 0.4698 |
| 0.2931 | 2.0 | 512 | 0.2460 | 0.6641 | 0.7856 | 0.5635 | 0.7677 | 0.6101 |
| 0.2931 | 3.0 | 768 | 0.2398 | 0.6791 | 0.7926 | 0.5764 | 0.7468 | 0.6330 |
| 0.1701 | 4.0 | 1024 | 0.2390 | 0.6897 | 0.7984 | 0.5893 | 0.7488 | 0.6476 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Serione/opt-125m-6
|
Serione
| 2024-10-17T07:59:49Z | 154 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:58:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
k-l-lambda/li-EAGLE-LLaMA3-Instruct-70B
|
k-l-lambda
| 2024-10-17T07:56:24Z | 5 | 0 | null |
[
"safetensors",
"ppeagle_vllm",
"license:apache-2.0",
"region:us"
] | null | 2024-10-17T07:44:05Z |
---
license: apache-2.0
---
|
Serione/opt-125m-4
|
Serione
| 2024-10-17T07:55:06Z | 143 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:54:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serione/opt-125m-3
|
Serione
| 2024-10-17T07:50:51Z | 145 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:49:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Serione/opt-125m-1
|
Serione
| 2024-10-17T07:40:50Z | 152 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:39:58Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
salts-models/confidence-be-closed-front
|
salts-models
| 2024-10-17T07:39:53Z | 7 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-16T15:50:23Z |
---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: back of beige conf_be colostomy pouch on the forest floor.
output:
url: samples/1729093668706__000002000_0.jpg
- text: front of white conf_be colostomy pouch on an office desk
output:
url: samples/1729093705280__000002000_1.jpg
- text: a black conf_be colostomy pouch and a beige conf_be colostomy pouch in an
animated style
output:
url: samples/1729093741858__000002000_2.jpg
- text: Front of a black conf_be colostomy pouch displayed on a 1960s television
system
output:
url: samples/1729093778420__000002000_3.jpg
- text: An image of a white conf_be colostomy pouch on a billboard in Times Square
in New York City in the evening
output:
url: samples/1729093815010__000002000_4.jpg
base_model: black-forest-labs/FLUX.1-dev
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# confidence_be_closed_front
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
<Gallery />
## Trigger words
Trigger word is "conf_be colostomy pouch"
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](/salts-models/confidence-be-closed-front/tree/main) them in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('salts-models/confidence-be-closed-front', weight_name='confidence_be_closed_front.safetensors')
image = pipeline('back of beige conf_be colostomy pouch on the forest floor.').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
outlookAi/7UOubHv80A
|
outlookAi
| 2024-10-17T07:38:26Z | 468 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-17T06:59:56Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: CRE
---
# 7Uoubhv80A
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `CRE` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('outlookAi/7UOubHv80A', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF
|
Triangle104
| 2024-10-17T07:36:24Z | 5 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"dataset:flammenai/Date-DPO-NoAsterisks",
"dataset:jondurbin/truthy-dpo-v0.1",
"base_model:flammenai/Flammades-Mistral-Nemo-12B",
"base_model:quantized:flammenai/Flammades-Mistral-Nemo-12B",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T07:34:11Z |
---
base_model: flammenai/Flammades-Mistral-Nemo-12B
datasets:
- flammenai/Date-DPO-NoAsterisks
- jondurbin/truthy-dpo-v0.1
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: Flammades-Mistral-Nemo-12B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 38.42
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 32.39
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 6.19
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 7.16
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 20.31
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 29.57
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B
name: Open LLM Leaderboard
---
# Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF
This model was converted to GGUF format from [`flammenai/Flammades-Mistral-Nemo-12B`](https://huggingface.co/flammenai/Flammades-Mistral-Nemo-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/flammenai/Flammades-Mistral-Nemo-12B) for more details on the model.
---
Model details:
-
nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2 finetuned on flammenai/Date-DPO-NoAsterisks and jondurbin/truthy-dpo-v0.1.
### Method

ORPO tuned with 2x RTX 3090 for 3 epochs.

### Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/Flammades-Mistral-Nemo-12B).
| Metric | Value |
|:--------------------|------:|
| Avg. | 22.34 |
| IFEval (0-Shot) | 38.42 |
| BBH (3-Shot) | 32.39 |
| MATH Lvl 5 (4-Shot) | 6.19 |
| GPQA (0-shot) | 7.16 |
| MuSR (0-shot) | 20.31 |
| MMLU-PRO (5-shot) | 29.57 |
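For readers curious how an ORPO run like this is typically wired up, here is a minimal, hedged sketch using TRL's `ORPOTrainer`; the hyperparameters, dataset handling and hardware setup are assumptions for illustration, not the authors' actual training script.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "nbeerbower/Mistral-Nemo-Gutenberg-Doppel-12B-v2"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Preference data with prompt/chosen/rejected columns (column names assumed).
dataset = load_dataset("jondurbin/truthy-dpo-v0.1", split="train")

args = ORPOConfig(output_dir="flammades-orpo", num_train_epochs=3, per_device_train_batch_size=1)

# Newer TRL versions rename `tokenizer` to `processing_class`.
trainer = ORPOTrainer(model=model, args=args, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```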
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF --hf-file flammades-mistral-nemo-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF --hf-file flammades-mistral-nemo-12b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF --hf-file flammades-mistral-nemo-12b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Flammades-Mistral-Nemo-12B-Q5_K_M-GGUF --hf-file flammades-mistral-nemo-12b-q5_k_m.gguf -c 2048
```
|
cstr/Ministral-8B-Instruct-2410-GGUF
|
cstr
| 2024-10-17T07:34:29Z | 29 | 1 |
llama.cpp
|
[
"llama.cpp",
"gguf",
"mistral",
"en",
"fr",
"de",
"es",
"it",
"pt",
"zh",
"ja",
"ru",
"ko",
"license:other",
"region:us"
] | null | 2024-10-16T16:09:44Z |
---
language:
- en
- fr
- de
- es
- it
- pt
- zh
- ja
- ru
- ko
license: other
license_name: mrl
inference: false
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: llama.cpp
---
# GGUF quants
These are (early testing) q4_k_m GGUF quants of Mistral/Ministral-8B-Instruct-2410.
Made with llama.cpp b3634, slightly modified.
They are intended for research use, e.g. in llama.cpp and wrappers (such as ollama), as covered by the MRL license reproduced below.
Note that until llama.cpp implements [sliding window](https://github.com/mistralai/mistral-inference/commit/6428ccf99e4fa6acdb0176d5c8d77b2878c75040?diff=unified#diff-451990ec6b235948f7e86fc9004de9e452a94fe5c5c55d384745d149fe2b290e), it is probably best to use them with a context size <= 2k.
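As an illustration of the 2k-context recommendation, here is a minimal, hedged sketch using the llama-cpp-python wrapper; the local GGUF file name is an assumption (use whatever file you downloaded from this repo).

```python
from llama_cpp import Llama

# Load the quantized model with a conservative 2k context window.
llm = Llama(model_path="ministral-8b-instruct-2410-q4_k_m.gguf", n_ctx=2048)

# V3-Tekken style instruct prompt.
prompt = "<s>[INST]Explain sliding-window attention in one sentence.[/INST]"
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```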
Original model card below.
# Mistral AI Research License
If You want to use a Mistral Model, a Derivative or an Output for any purpose that is not expressly authorized under this Agreement, You must request a license from Mistral AI, which Mistral AI may grant to You in Mistral AI's sole discretion. To discuss such a license, please contact Mistral AI via the website contact form: https://mistral.ai/contact/
## 1. Scope and acceptance
**1.1. Scope of the Agreement.** This Agreement applies to any use, modification, or Distribution of any Mistral Model by You, regardless of the source You obtained a copy of such Mistral Model.
**1.2. Acceptance.** By accessing, using, modifying, Distributing a Mistral Model, or by creating, using or distributing a Derivative of the Mistral Model, You agree to be bound by this Agreement.
**1.3. Acceptance on behalf of a third-party.** If You accept this Agreement on behalf of Your employer or another person or entity, You warrant and represent that You have the authority to act and accept this Agreement on their behalf. In such a case, the word "You" in this Agreement will refer to Your employer or such other person or entity.
## 2. License
**2.1. Grant of rights**. Subject to Section 3 below, Mistral AI hereby grants You a non-exclusive, royalty-free, worldwide, non-sublicensable, non-transferable, limited license to use, copy, modify, and Distribute under the conditions provided in Section 2.2 below, the Mistral Model and any Derivatives made by or for Mistral AI and to create Derivatives of the Mistral Model.
**2.2. Distribution of Mistral Model and Derivatives made by or for Mistral AI.** Subject to Section 3 below, You may Distribute copies of the Mistral Model and/or Derivatives made by or for Mistral AI, under the following conditions:
You must make available a copy of this Agreement to third-party recipients of the Mistral Models and/or Derivatives made by or for Mistral AI you Distribute, it being specified that any rights to use the Mistral Models and/or Derivatives made by or for Mistral AI shall be directly granted by Mistral AI to said third-party recipients pursuant to the Mistral AI Research License agreement executed between these parties;
You must retain in all copies of the Mistral Models the following attribution notice within a "Notice" text file distributed as part of such copies: "Licensed by Mistral AI under the Mistral AI Research License".
**2.3. Distribution of Derivatives made by or for You.** Subject to Section 3 below, You may Distribute any Derivatives made by or for You under additional or different terms and conditions, provided that:
In any event, the use and modification of Mistral Model and/or Derivatives made by or for Mistral AI shall remain governed by the terms and conditions of this Agreement;
You include in any such Derivatives made by or for You prominent notices stating that You modified the concerned Mistral Model; and
Any terms and conditions You impose on any third-party recipients relating to Derivatives made by or for You shall neither limit such third-party recipients' use of the Mistral Model or any Derivatives made by or for Mistral AI in accordance with the Mistral AI Research License nor conflict with any of its terms and conditions.
## 3. Limitations
**3.1. Misrepresentation.** You must not misrepresent or imply, through any means, that the Derivatives made by or for You and/or any modified version of the Mistral Model You Distribute under your name and responsibility is an official product of Mistral AI or has been endorsed, approved or validated by Mistral AI, unless You are authorized by Us to do so in writing.
**3.2. Usage Limitation.** You shall only use the Mistral Models, Derivatives (whether or not created by Mistral AI) and Outputs for Research Purposes.
## 4. Intellectual Property
**4.1. Trademarks.** No trademark licenses are granted under this Agreement, and in connection with the Mistral Models, You may not use any name or mark owned by or associated with Mistral AI or any of its affiliates, except (i) as required for reasonable and customary use in describing and Distributing the Mistral Models and Derivatives made by or for Mistral AI and (ii) for attribution purposes as required by this Agreement.
**4.2. Outputs.** We claim no ownership rights in and to the Outputs. You are solely responsible for the Outputs You generate and their subsequent uses in accordance with this Agreement. Any Outputs shall be subject to the restrictions set out in Section 3 of this Agreement.
**4.3. Derivatives.** By entering into this Agreement, You accept that any Derivatives that You may create or that may be created for You shall be subject to the restrictions set out in Section 3 of this Agreement.
## 5. Liability
**5.1. Limitation of liability.** In no event, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall Mistral AI be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this Agreement or out of the use or inability to use the Mistral Models and Derivatives (including but not limited to damages for loss of data, loss of goodwill, loss of expected profit or savings, work stoppage, computer failure or malfunction, or any damage caused by malware or security breaches), even if Mistral AI has been advised of the possibility of such damages.
**5.2. Indemnification.** You agree to indemnify and hold harmless Mistral AI from and against any claims, damages, or losses arising out of or related to Your use or Distribution of the Mistral Models and Derivatives.
## 6. Warranty
**6.1. Disclaimer.** Unless required by applicable law or prior agreed to by Mistral AI in writing, Mistral AI provides the Mistral Models and Derivatives on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. Mistral AI does not represent nor warrant that the Mistral Models and Derivatives will be error-free, meet Your or any third party's requirements, be secure or will allow You or any third party to achieve any kind of result or generate any kind of content. You are solely responsible for determining the appropriateness of using or Distributing the Mistral Models and Derivatives and assume any risks associated with Your exercise of rights under this Agreement.
## 7. Termination
**7.1. Term.** This Agreement is effective as of the date of your acceptance of this Agreement or access to the concerned Mistral Models or Derivatives and will continue until terminated in accordance with the following terms.
**7.2. Termination.** Mistral AI may terminate this Agreement at any time if You are in breach of this Agreement. Upon termination of this Agreement, You must cease to use all Mistral Models and Derivatives and shall permanently delete any copy thereof. The following provisions, in their relevant parts, will survive any termination or expiration of this Agreement, each for the duration necessary to achieve its own intended purpose (e.g. the liability provision will survive until the end of the applicable limitation period):Sections 5 (Liability), 6(Warranty), 7 (Termination) and 8 (General Provisions).
**7.3. Litigation.** If You initiate any legal action or proceedings against Us or any other entity (including a cross-claim or counterclaim in a lawsuit), alleging that the Model or a Derivative, or any part thereof, infringe upon intellectual property or other rights owned or licensable by You, then any licenses granted to You under this Agreement will immediately terminate as of the date such legal action or claim is filed or initiated.
## 8. General provisions
**8.1. Governing laws.** This Agreement will be governed by the laws of France, without regard to choice of law principles, and the UN Convention on Contracts for the International Sale of Goods does not apply to this Agreement.
**8.2. Competent jurisdiction.** The courts of Paris shall have exclusive jurisdiction of any dispute arising out of this Agreement.
**8.3. Severability.** If any provision of this Agreement is held to be invalid, illegal or unenforceable, the remaining provisions shall be unaffected thereby and remain valid as if such provision had not been set forth herein.
## 9. Definitions
"Agreement": means this Mistral AI Research License agreement governing the access, use, and Distribution of the Mistral Models, Derivatives and Outputs.
"Derivative": means any (i) modified version of the Mistral Model (including but not limited to any customized or fine-tuned version thereof), (ii) work based on the Mistral Model, or (iii) any other derivative work thereof.
"Distribution", "Distributing", "Distribute" or "Distributed": means supplying, providing or making available, by any means, a copy of the Mistral Models and/or the Derivatives as the case may be, subject to Section 3 of this Agreement.
"Mistral AI", "We" or "Us": means Mistral AI, a French société par actions simplifiée registered in the Paris commercial registry under the number 952 418 325, and having its registered seat at 15, rue des Halles, 75001 Paris.
"Mistral Model": means the foundational large language model(s), and its elements which include algorithms, software, instructed checkpoints, parameters, source code (inference code, evaluation code and, if applicable, fine-tuning code) and any other elements associated thereto made available by Mistral AI under this Agreement, including, if any, the technical documentation, manuals and instructions for the use and operation thereof.
"Research Purposes": means any use of a Mistral Model, Derivative, or Output that is solely for (a) personal, scientific or academic research, and (b) for non-profit and non-commercial purposes, and not directly or indirectly connected to any commercial activities or business operations. For illustration purposes, Research Purposes does not include (1) any usage of the Mistral Model, Derivative or Output by individuals or contractors employed in or engaged by companies in the context of (a) their daily tasks, or (b) any activity (including but not limited to any testing or proof-of-concept) that is intended to generate revenue, nor (2) any Distribution by a commercial entity of the Mistral Model, Derivative or Output whether in return for payment or free of charge, in any medium or form, including but not limited to through a hosted or managed service (e.g. SaaS, cloud instances, etc.), or behind a software layer.
"Outputs": means any content generated by the operation of the Mistral Models or the Derivatives from a prompt (i.e., text instructions) provided by users. For the avoidance of doubt, Outputs do not include any components of a Mistral Models, such as any fine-tuned versions of the Mistral Models, the weights, or parameters.
"You": means the individual or entity entering into this Agreement with Mistral AI.
# Model Card for Ministral-8B-Instruct-2410
We introduce two new state-of-the-art models for local intelligence, on-device computing, and at-the-edge use cases. We call them les Ministraux: Ministral 3B and Ministral 8B.
The Ministral-8B-Instruct-2410 Language Model is an instruct fine-tuned model significantly outperforming existing models of similar size, released under the Mistral Research License.
If you are interested in using Ministral-3B or Ministral-8B commercially, outperforming Mistral-7B, [reach out to us](https://mistral.ai/contact/).
For more details about les Ministraux please refer to our release [blog post](https://mistral.ai/news/ministraux).
## Ministral 8B Key features
- Released under the **Mistral Research License**, reach out to us for a commercial license
- Trained with a **128k context window** with **interleaved sliding-window attention**
- Trained on a large proportion of **multilingual and code data**
- Supports **function calling**
- Vocabulary size of **131k**, using the **V3-Tekken** tokenizer
### Basic Instruct Template (V3-Tekken)
```
<s>[INST]user message[/INST]assistant response</s>[INST]new user message[/INST]
```
*For more information about the tokenizer please refer to [mistral-common](https://github.com/mistralai/mistral-common)*
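As a quick illustration (a minimal sketch, assuming `mistral_common` is installed and that `MistralTokenizer.v3(is_tekken=True)` selects the V3-Tekken tokenizer), the template above can be rendered programmatically instead of assembled by hand:
```py
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage, AssistantMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

# Assumption: v3 with is_tekken=True corresponds to the V3-Tekken tokenizer described above.
tokenizer = MistralTokenizer.v3(is_tekken=True)

request = ChatCompletionRequest(
    messages=[
        UserMessage(content="user message"),
        AssistantMessage(content="assistant response"),
        UserMessage(content="new user message"),
    ]
)

tokenized = tokenizer.encode_chat_completion(request)
print(tokenized.text)         # readable rendering of the template shown above
print(len(tokenized.tokens))  # number of tokens that would be fed to the model
```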
## Ministral 8B Architecture
| Feature | Value |
|:---------------------:|:--------------------:|
| **Architecture** | Dense Transformer |
| **Parameters** | 8,019,808,256 |
| **Layers** | 36 |
| **Heads** | 32 |
| **Dim** | 4096 |
| **KV Heads (GQA)** | 8 |
| **Hidden Dim** | 12288 |
| **Head Dim** | 128 |
| **Vocab Size** | 131,072 |
| **Context Length** | 128k |
| **Attention Pattern** | Ragged (128k,32k,32k,32k) |
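As a sanity check, the parameter count above can be reproduced from the table under common Llama-style assumptions (untied input/output embeddings, gated SwiGLU MLP, bias-free projections, RMSNorm); these architectural details are assumptions rather than facts stated in the table:
```py
# Back-of-the-envelope parameter count from the table above.
# Assumptions: untied embeddings, SwiGLU MLP (gate/up/down), no biases, RMSNorm.
dim, hidden, layers, vocab = 4096, 12288, 36, 131072
head_dim, kv_heads = 128, 8

attn = 2 * dim * dim + 2 * dim * (kv_heads * head_dim)  # q/o projections + k/v projections (GQA)
mlp = 3 * dim * hidden                                   # gate, up, down projections
norms = 2 * dim                                          # two RMSNorms per layer
per_layer = attn + mlp + norms

total = layers * per_layer + 2 * vocab * dim + dim       # + input embeddings, lm_head, final norm
print(total)  # 8019808256 -- matches the "Parameters" row
```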
## Benchmarks
#### Base Models
<u>Knowledge & Commonsense</u>
| Model | MMLU | AGIEval | Winogrande | Arc-c | TriviaQA |
|:-------------:|:------:|:---------:|:------------:|:-------:|:----------:|
| Mistral 7B Base | 62.5 | 42.5 | 74.2 | 67.9 | 62.5 |
| Llama 3.1 8B Base | 64.7 | 44.4 | 74.6 | 46.0 | 60.2 |
| ***Ministral 8B Base*** | ***<u>65.0</u>*** | ***<u>48.3</u>*** | ***<u>75.3</u>*** | ***<u>71.9</u>*** | ***<u>65.5</u>*** |
| | | | | | |
| Gemma 2 2B Base | 52.4 | 33.8 | 68.7 | 42.6 | 47.8 |
| Llama 3.2 3B Base | 56.2 | 37.4 | 59.6 | 43.1 | 50.7 |
| ***Ministral 3B Base*** | ***<u>60.9</u>*** | ***<u>42.1</u>*** | ***<u>72.7</u>*** | ***<u>64.2</u>*** | ***<u>56.7</u>*** |
<u>Code & Math</u>
| Model | HumanEval pass@1 |GSM8K maj@8 |
|:-------------:|:-------------------:|:---------------:|
| Mistral 7B Base | 26.8 | 32.0 |
| Llama 3.1 8B Base | ***<u>37.8</u>*** | 42.2 |
| ***Ministral 8B Base*** | 34.8 | ***<u>64.5</u>*** |
| | | |
| Gemma 2 2B | 20.1 | 35.5 |
| Llama 3.2 3B | 14.6 | 33.5 |
| ***Ministral 3B*** | ***<u>34.2</u>*** | ***<u>50.9</u>*** |
<u>Multilingual</u>
| Model | French MMLU | German MMLU | Spanish MMLU |
|:-------------:|:-------------:|:-------------:|:-------------:|
| Mistral 7B Base | 50.6 | 49.6 | 51.4 |
| Llama 3.1 8B Base | 50.8 | 52.8 | 54.6 |
| ***Ministral 8B Base*** | ***<u>57.5</u>*** | ***<u>57.4</u>*** | ***<u>59.6</u>*** |
| | | | |
| Gemma 2 2B Base | 41.0 | 40.1 | 41.7 |
| Llama 3.2 3B Base | 42.3 | 42.2 | 43.1 |
| ***Ministral 3B Base*** | ***<u>49.1</u>*** | ***<u>48.3</u>*** | ***<u>49.5</u>*** |
### Instruct Models
<u>Chat/Arena (gpt-4o judge)</u>
| Model | MTBench | Arena Hard | Wild bench |
|:-------------:|:---------:|:------------:|:------------:|
| Mistral 7B Instruct v0.3 | 6.7 | 44.3 | 33.1 |
| Llama 3.1 8B Instruct | 7.5 | 62.4 | 37.0 |
| Gemma 2 9B Instruct | 7.6 | 68.7 | ***<u>43.8</u>*** |
| ***Ministral 8B Instruct*** | ***<u>8.3</u>*** | ***<u>70.9</u>*** | 41.3 |
| | | | |
| Gemma 2 2B Instruct | 7.5 | 51.7 | 32.5 |
| Llama 3.2 3B Instruct | 7.2 | 46.0 | 27.2 |
| ***Ministral 3B Instruct*** | ***<u>8.1</u>*** | ***<u>64.3</u>*** | ***<u>36.3</u>*** |
<u>Code & Math</u>
| Model | MBPP pass@1 | HumanEval pass@1 | Math maj@1 |
|:-------------:|:-------------:|:------------------:|:-------------:|
| Mistral 7B Instruct v0.3 | 50.2 | 38.4 | 13.2 |
| Gemma 2 9B Instruct | 68.5 | 67.7 | 47.4 |
| Llama 3.1 8B Instruct | 69.7 | 67.1 | 49.3 |
| ***Ministral 8B Instruct*** | ***<u>70.0</u>*** | ***<u>76.8</u>*** | ***<u>54.5</u>*** |
| | | | |
| Gemma 2 2B Instruct | 54.5 | 42.7 | 22.8 |
| Llama 3.2 3B Instruct | 64.6 | 61.0 | 38.4 |
| ***Ministral 3B Instruct*** | ***<u>67.7</u>*** | ***<u>77.4</u>*** | ***<u>51.7</u>*** |
<u>Function calling</u>
| Model | Internal bench |
|:-------------:|:-----------------:|
| Mistral 7B Instruct v0.3 | 6.9 |
| Llama 3.1 8B Instruct | N/A |
| Gemma 2 9B Instruct | N/A |
| ***Ministral 8B Instruct*** | ***<u>31.6</u>*** |
| | |
| Gemma 2 2B Instruct | N/A |
| Llama 3.2 3B Instruct | N/A |
| ***Ministral 3B Instruct*** | ***<u>28.4</u>*** |
## Usage Examples
### vLLM (recommended)
We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm)
to implement production-ready inference pipelines.
> [!IMPORTANT]
> Currently vLLM is capped at a 32k context size because interleaved sliding-window attention kernels for paged attention are not yet implemented in vLLM.
> These kernels are being worked on; this model card will be updated as soon as they are fully supported in vLLM.
> To take advantage of the full 128k context size we recommend [Mistral Inference](https://huggingface.co/mistralai/Ministral-8B-Instruct-2410#mistral-inference)
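Until then, it can be useful to pin the context length explicitly when constructing the engine (a sketch; `max_model_len` is a standard vLLM argument, and 32768 reflects the current cap):
```py
from vllm import LLM

# Cap the context at 32k until interleaved sliding-window attention is supported in vLLM.
llm = LLM(
    model="mistralai/Ministral-8B-Instruct-2410",
    tokenizer_mode="mistral",
    config_format="mistral",
    load_format="mistral",
    max_model_len=32768,
)
```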
**_Installation_**
Make sure you install `vLLM >= v0.6.2`:
```
pip install --upgrade vllm
```
Also make sure you have `mistral_common >= 1.4.4` installed:
```
pip install --upgrade mistral_common
```
You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile).
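A typical container invocation looks roughly like the following (a sketch assuming the official `vllm/vllm-openai` image and a GPU-enabled Docker runtime; adjust the tag and flags to your setup):
```
docker run --gpus all -p 8000:8000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    vllm/vllm-openai:latest \
    --model mistralai/Ministral-8B-Instruct-2410 \
    --tokenizer_mode mistral --config_format mistral --load_format mistral
```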
**_Offline_**
```py
from vllm import LLM
from vllm.sampling_params import SamplingParams
model_name = "mistralai/Ministral-8B-Instruct-2410"
sampling_params = SamplingParams(max_tokens=8192)
# Note that running Ministral 8B on a single GPU requires 24 GB of GPU RAM.
# To split the GPU requirement over multiple devices, add e.g. `tensor_parallel_size=2` to the LLM(...) call below.
llm = LLM(model=model_name, tokenizer_mode="mistral", config_format="mistral", load_format="mistral")
prompt = "Do we need to think for 10 seconds to find the answer of 1 + 1?"
messages = [
{
"role": "user",
"content": prompt
},
]
outputs = llm.chat(messages, sampling_params=sampling_params)
print(outputs[0].outputs[0].text)
# You don't need to think for 10 seconds to find the answer to 1 + 1. The answer is 2,
# and you can easily add these two numbers in your mind very quickly without any delay.
```
**_Server_**
You can also use Ministral-8B in a server/client setting.
1. Spin up a server:
```
vllm serve mistralai/Ministral-8B-Instruct-2410 --tokenizer_mode mistral --config_format mistral --load_format mistral
```
**Note:** Running Ministral-8B on a single GPU requires 24 GB of GPU RAM.
If you want to divide the GPU requirement over multiple devices, please add *e.g.* `--tensor-parallel-size 2`
2. And ping the client:
```
curl --location 'http://<your-node-url>:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--header 'Authorization: Bearer token' \
--data '{
"model": "mistralai/Ministral-8B-Instruct-2410",
"messages": [
{
"role": "user",
"content": "Do we need to think for 10 seconds to find the answer of 1 + 1?"
}
]
}'
```
### Mistral-inference
We recommend using [mistral-inference](https://github.com/mistralai/mistral-inference) to quickly try out / "vibe-check" the model.
**_Install_**
Make sure to have `mistral_inference >= 1.5.0` installed.
```
pip install mistral_inference --upgrade
```
**_Download_**
```py
from huggingface_hub import snapshot_download
from pathlib import Path
mistral_models_path = Path.home().joinpath('mistral_models', '8B-Instruct')
mistral_models_path.mkdir(parents=True, exist_ok=True)
snapshot_download(repo_id="mistralai/Ministral-8B-Instruct-2410", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```
### Chat
After installing `mistral_inference`, a `mistral-chat` CLI command should be available in your environment. You can chat with the model using
```
mistral-chat $HOME/mistral_models/8B-Instruct --instruct --max_tokens 256
```
### Passkey detection
> [!IMPORTANT]
> In this example the passkey message has over 100k tokens and mistral-inference
> does not have a chunked pre-fill mechanism. Therefore you will need a lot of
> GPU memory in order to run the below example (80 GB). For a more memory-efficient
> solution we recommend using vLLM.
```py
from mistral_inference.transformer import Transformer
from pathlib import Path
import json
from mistral_inference.generate import generate
from huggingface_hub import hf_hub_download
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
def load_passkey_request() -> ChatCompletionRequest:
passkey_file = hf_hub_download(repo_id="mistralai/Ministral-8B-Instruct-2410", filename="passkey_example.json")
with open(passkey_file, "r") as f:
data = json.load(f)
message_content = data["messages"][0]["content"]
return ChatCompletionRequest(messages=[UserMessage(content=message_content)])
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path, softmax_fp32=False)
completion_request = load_passkey_request()
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result) # The pass key is 13005.
```
### Instruct following
```py
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(messages=[UserMessage(content="How often does the letter r occur in Mistral?")])
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
### Function calling
```py
from mistral_common.protocol.instruct.tool_calls import Function, Tool
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.tekken import SpecialTokenPolicy
tokenizer = MistralTokenizer.from_file(f"{mistral_models_path}/tekken.json")
tekken = tokenizer.instruct_tokenizer.tokenizer
tekken.special_token_policy = SpecialTokenPolicy.IGNORE
model = Transformer.from_folder(mistral_models_path)
completion_request = ChatCompletionRequest(
tools=[
Tool(
function=Function(
name="get_current_weather",
description="Get the current weather",
parameters={
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"format": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
"description": "The temperature unit to use. Infer this from the users location.",
},
},
"required": ["location", "format"],
},
)
)
],
messages=[
UserMessage(content="What's the weather like today in Paris?"),
],
)
tokens = tokenizer.encode_chat_completion(completion_request).tokens
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
print(result)
```
## The Mistral AI Team
Albert Jiang, Alexandre Abou Chahine, Alexandre Sablayrolles, Alexis Tacnet, Alodie Boissonnet, Alok Kothari, Amélie Héliou, Andy Lo, Anna Peronnin, Antoine Meunier, Antoine Roux, Antonin Faure, Aritra Paul, Arthur Darcet, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Avinash Sooriyarachchi, Baptiste Rozière, Barry Conklin, Bastien Bouillon, Blanche Savary de Beauregard, Carole Rambaud, Caroline Feldman, Charles de Freminville, Charline Mauro, Chih-Kuan Yeh, Chris Bamford, Clement Auguy, Corentin Heintz, Cyriaque Dubois, Devendra Singh Chaplot, Diego Las Casas, Diogo Costa, Eléonore Arcelin, Emma Bou Hanna, Etienne Metzger, Fanny Olivier Autran, Francois Lesage, Garance Gourdel, Gaspard Blanchet, Gaspard Donada Vidal, Gianna Maria Lengyel, Guillaume Bour, Guillaume Lample, Gustave Denis, Harizo Rajaona, Himanshu Jaju, Ian Mack, Ian Mathew, Jean-Malo Delignon, Jeremy Facchetti, Jessica Chudnovsky, Joachim Studnia, Justus Murke, Kartik Khandelwal, Kenneth Chiu, Kevin Riera, Leonard Blier, Leonard Suslian, Leonardo Deschaseaux, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Sophia Yang, Margaret Jennings, Marie Pellat, Marie Torelli, Marjorie Janiewicz, Mathis Felardos, Maxime Darrin, Michael Hoff, Mickaël Seznec, Misha Jessel Kenyon, Nayef Derwiche, Nicolas Carmont Zaragoza, Nicolas Faurie, Nicolas Moreau, Nicolas Schuhl, Nikhil Raghuraman, Niklas Muhs, Olivier de Garrigues, Patricia Rozé, Patricia Wang, Patrick von Platen, Paul Jacob, Pauline Buche, Pavankumar Reddy Muddireddy, Perry Savas, Pierre Stock, Pravesh Agrawal, Renaud de Peretti, Romain Sauvestre, Romain Sinthe, Roman Soletskyi, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Soham Ghosh, Sylvain Regnier, Szymon Antoniak, Teven Le Scao, Theophile Gervet, Thibault Schueller, Thibaut Lavril, Thomas Wang, Timothée Lacroix, Valeriia Nemychnikova, Wendy Shang, William El Sayed, William Marshall
|
cloud093/distilbert-base-uncased-finetuned-ner
|
cloud093
| 2024-10-17T07:34:16Z | 94 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2024-10-17T02:41:17Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0609
- Precision: 0.9262
- Recall: 0.9369
- F1: 0.9315
- Accuracy: 0.9839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
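For reference, a minimal sketch of how these hyperparameters map onto 🤗 `TrainingArguments` (illustrative only; the dataset and preprocessing used for this run are not documented here):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the run configuration listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
    # The Adam betas/epsilon listed above are the TrainingArguments defaults.
)
```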
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2419 | 1.0 | 878 | 0.0689 | 0.9045 | 0.9198 | 0.9121 | 0.9802 |
| 0.0524 | 2.0 | 1756 | 0.0600 | 0.9208 | 0.9331 | 0.9269 | 0.9830 |
| 0.0304 | 3.0 | 2634 | 0.0609 | 0.9262 | 0.9369 | 0.9315 | 0.9839 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Tokenizers 0.19.1
|
allknowingroger/Qwen2.5-slerp-14B
|
allknowingroger
| 2024-10-17T07:32:57Z | 267 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct",
"base_model:v000000/Qwen2.5-Lumen-14B",
"base_model:merge:v000000/Qwen2.5-Lumen-14B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:09:59Z |
---
base_model:
- v000000/Qwen2.5-Lumen-14B
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [v000000/Qwen2.5-Lumen-14B](https://huggingface.co/v000000/Qwen2.5-Lumen-14B)
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Qwen/Qwen2.5-14B-Instruct
merge_method: slerp
base_model: v000000/Qwen2.5-Lumen-14B
parameters:
t:
- value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
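For intuition, SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line; the toy sketch below is illustrative only and is not mergekit's actual implementation:
```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Toy spherical linear interpolation between two weight tensors."""
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos(((a_n * b_n).sum()).clamp(-1 + 1e-7, 1 - 1e-7))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel tensors -> plain linear interpolation
        return (1 - t) * a + t * b
    return (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b

# t near 0 keeps the base model (Qwen2.5-Lumen-14B); t near 1 moves toward Qwen2.5-14B-Instruct.
merged = slerp(0.4, torch.randn(4096), torch.randn(4096))
```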
|
tuanpasg/vinallama-7b-history
|
tuanpasg
| 2024-10-17T07:29:39Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:14:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
etri-vilab/koala-lightning-700m
|
etri-vilab
| 2024-10-17T07:29:25Z | 618 | 5 |
diffusers
|
[
"diffusers",
"onnx",
"safetensors",
"text-to-image",
"KOALA",
"dataset:Ejafa/ye-pop",
"arxiv:2312.04005",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2024-05-29T06:58:43Z |
---
tags:
- text-to-image
- KOALA
datasets:
- Ejafa/ye-pop
---
<!-- <div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/yosvi68jvyarbvymxc4hm/github_logo.png?rlkey=r9ouwcd7cqxjbvio43q9b3djd&dl=1" width="1024px" />
</div> -->
<div align="center">
<img src="https://dl.dropbox.com/scl/fi/e2niisp985i40p7hww0u8/github_logo_v2.png?rlkey=q9bf1qtigka8bdbqmfjbc2rlu&dl=1" width="1024px" />
</div>
<div style="display:flex;justify-content: center">
<a href="https://youngwanlee.github.io/KOALA/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a>  
<a href="https://github.com/youngwanLEE/sdxl-koala"><img src="https://img.shields.io/static/v1?label=Code&message=Github&color=blue&logo=github"></a>  
<a href="https://arxiv.org/abs/2312.04005"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:KOALA&color=red&logo=arxiv"></a>  
<a href="https://colab.research.google.com/drive/16gBq2J4fo8xCgmWaBvrqnEb-liAz6097?usp=sharing"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Demo In Colab"/>
</a>  
</div>
# KOALA-Lightning-700M Model Card
### Summary
- Trained using a **self-attention-based knowledge distillation** method
- Teacher model: [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning)
- Training dataset: a subset of [LAION-POP](https://huggingface.co/datasets/Ejafa/ye-pop) dataset
- Training iteration: 500K with a batch size of 128
- Training GPUs: 4 x NVIDIA A100 (80GB)
## KOALA-Lightning Models
|Model|link|
|:--|:--|
|koala-lightning-1b | https://huggingface.co/etri-vilab/koala-lightning-1b|
|koala-lightning-700m | https://huggingface.co/etri-vilab/koala-lightning-700m|
## Abstract
### TL;DR
> We propose a fast text-to-image model, called KOALA, by compressing SDXL's U-Net and distilling knowledge from SDXL into our model. KOALA-Lightning-700M can generate a 1024x1024 image in 0.66 seconds on an NVIDIA 4090 GPU, which is more than 4x faster than SDXL. KOALA-700M can be used as a cost-effective alternative between SDM and SDXL in resource-constrained environments.
<details><summary>FULL abstract</summary>
As text-to-image (T2I) synthesis models increase in size, they demand higher inference costs due to the need for more expensive GPUs with larger memory, which makes it challenging to reproduce these models in addition to the restricted access to training datasets. Our study aims to reduce these inference costs and explores how far the generative capabilities of T2I models can be extended using only publicly available datasets and open-source models. To this end, by using the de facto standard text-to-image model, Stable Diffusion XL (SDXL), we present three key practices in building an efficient T2I model: (1) Knowledge distillation: we explore how to effectively distill the generation capability of SDXL into an efficient U-Net and find that self-attention is the most crucial part. (2) Data: despite fewer samples, high-resolution images with rich captions are more crucial than a larger number of low-resolution images with short captions. (3) Teacher: Step-distilled Teacher allows T2I models to reduce the noising steps. Based on these findings, we build two types of efficient text-to-image models, called KOALA-Turbo &-Lightning, with two compact U-Nets (1B & 700M), reducing the model size up to 54% and 69% of the SDXL U-Net. In particular, the KOALA-Lightning-700M is 4x faster than SDXL while still maintaining satisfactory generation quality. Moreover, unlike SDXL, our KOALA models can generate 1024px high-resolution images on consumer-grade GPUs with 8GB of VRAMs (3060Ti). We believe that our KOALA models will have a significant practical impact, serving as cost-effective alternatives to SDXL for academic researchers and general users in resource-constrained environments.
</details>
<br>
These 1024x1024 samples were generated by KOALA-Lightning-700M using 10 denoising steps in 0.66 seconds on an NVIDIA 4090 GPU.
<div align="center">
<img src="https://dl.dropbox.com/scl/fi/fjpw93dbrl8xc8pwljclb/teaser_final.png?rlkey=6kf216quj6am8y20nduhenva2&dl=1" width="1024px" />
</div>
## Architecture
There are two types of compressed U-Net, KOALA-1B and KOALA-700M, realized by reducing the number of residual blocks and transformer blocks.
<div align="center">
<img src="https://dl.dropboxusercontent.com/scl/fi/5ydeywgiyt1d3njw63dpk/arch.png?rlkey=1p6imbjs4lkmfpcxy153i1a2t&dl=1" width="1024px" />
</div>
### U-Net comparison
| U-Net | SDM-v2.0 | SDXL-Base-1.0 | KOALA-1B | KOALA-700M |
|-------|:----------:|:-----------:|:-----------:|:-------------:|
| Param. | 865M | 2,567M | 1,161M | 782M |
| CKPT size | 3.46GB | 10.3GB | 4.4GB | 3.0GB |
| Tx blocks | [1, 1, 1, 1] | [0, 2, 10] | [0, 2, 6] | [0, 2, 5] |
| Mid block | ✓ | ✓ | ✓ | ✗ |
| Latency | 1.131s | 3.133s | 1.604s | 1.257s |
- Tx means transformer block and CKPT means the trained checkpoint file.
- We measured latency with FP16 precision and 25 denoising steps on an NVIDIA 4090 GPU (24GB).
- SDM-v2.0 uses 768x768 resolution, while SDXL and KOALA models use 1024x1024 resolution.
## Latency and memory usage comparison on different GPUs
We measured the inference time of SDXL-Turbo and KOALA-Turbo models at a resolution of 512x512, and other models at 1024x1024, using a variety of consumer-grade GPUs: NVIDIA 3060Ti (8GB), 2080Ti (11GB), and 4090 (24GB). 'OOM' indicates Out-of-Memory. Note that SDXL models cannot operate on the 3060Ti with 8GB VRAM, whereas <b>our KOALA models can run on all GPU types.</b>
<div align="center">
<img src="https://dl.dropbox.com/scl/fi/eif4cuazx64chd2ybm32w/latency_memory_labels_anno_wide.png?rlkey=otev2ujcn1jekvqre5jksg2e5&dl=1" width="1024px" />
</div>
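The latencies reported here can be approximated with a simple wall-clock measurement around the 🤗 Diffusers pipeline (a rough harness, not the authors' benchmarking script; exact numbers depend on GPU, driver, and attention backend):
```python
import time
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "etri-vilab/koala-lightning-700m", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of a koala in the forest"
pipe(prompt=prompt, num_inference_steps=10)  # warm-up run

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompt=prompt, num_inference_steps=10)  # 1024x1024 by default for SDXL-style pipelines
torch.cuda.synchronize()
print(f"latency: {time.perf_counter() - start:.3f}s")
```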
## Key Features
- **Efficient U-Net Architecture**: KOALA models use a simplified U-Net architecture that reduces the model size by up to 54% and 69% respectively compared to its predecessor, Stable Diffusion XL (SDXL).
- **Self-Attention-Based Knowledge Distillation**: The core technique in KOALA focuses on the distillation of self-attention features, which proves crucial for maintaining image generation quality.
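As a rough illustration of the second point (not the authors' training code), self-attention feature distillation can be expressed as an auxiliary MSE loss between matched teacher and student attention maps:
```python
import torch
import torch.nn.functional as F

def self_attn_distill_loss(student_maps, teacher_maps):
    """Hypothetical sketch: MSE between paired self-attention maps of student and teacher U-Nets.

    Both arguments are lists of (batch, heads, query, key) tensors already matched block-by-block;
    the real KOALA recipe may select or weight blocks differently.
    """
    loss = 0.0
    for s, t in zip(student_maps, teacher_maps):
        if s.shape != t.shape:          # e.g. different head counts -> average over heads
            s, t = s.mean(dim=1), t.mean(dim=1)
        loss = loss + F.mse_loss(s, t.detach())
    return loss / max(len(student_maps), 1)

# total_loss = diffusion_loss + lambda_attn * self_attn_distill_loss(student_maps, teacher_maps)
```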
## Model Description
- Developed by [ETRI Visual Intelligence Lab](https://huggingface.co/etri-vilab)
- Developer: [Youngwan Lee](https://youngwanlee.github.io/), [Kwanyong Park](https://pkyong95.github.io/), [Yoorhim Cho](https://ofzlo.github.io/), [Young-Ju Lee](https://scholar.google.com/citations?user=6goOQh8AAAAJ&hl=en), [Sung Ju Hwang](http://www.sungjuhwang.com/)
- Model Description: Latent Diffusion-based text-to-image generative model. KOALA models use the same text encoders as [SDXL-Base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) and only replace the denoising U-Net with the compressed U-Nets.
- Teacher model: [SDXL-Lightning](https://huggingface.co/ByteDance/SDXL-Lightning)
- Training dataset: a subset of [LAION-POP](https://huggingface.co/datasets/Ejafa/ye-pop) dataset
- Training iteration: 500K with a batch size of 128
- GPUs: 4 x NVIDIA A100 (80GB)
- Resources for more information: Check out [KOALA report on arXiv](https://arxiv.org/abs/2312.04005) and [project page](https://youngwanlee.github.io/KOALA/).
## Usage with 🤗[Diffusers library](https://github.com/huggingface/diffusers)
Inference code with the number of denoising steps set to 10 (the setting reported for KOALA-Lightning above):
```python
import torch
from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler
pipe = StableDiffusionXLPipeline.from_pretrained("etri-vilab/koala-lightning-700m", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
# Ensure sampler uses "trailing" timesteps and "sample" prediction type.
pipe.scheduler = EulerDiscreteScheduler.from_config(
pipe.scheduler.config, timestep_spacing="trailing"
)
prompt = "A portrait painting of a Golden Retriever like Leonard da Vinci"
negative = "worst quality, low quality, illustration, low resolution"
image = pipe(prompt=prompt, negative_prompt=negative, guidance_scale=3.5, num_inference_steps=10).images[0]
```
## Uses
### Direct Use
The model is intended for research purposes only. Possible research areas and tasks include
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.
- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Excluded uses are described below.
### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.
## Limitations and Bias
- Text Rendering: The models face challenges in rendering long, legible text within images.
- Complex Prompts: KOALA sometimes struggles with complex prompts involving multiple attributes.
- Dataset Dependencies: The current limitations are partially attributed to the characteristics of the training dataset (LAION-aesthetics-V2 6+).
## Citation
```bibtex
@misc{lee2023koala,
title={KOALA: Empirical Lessons Toward Memory-Efficient and Fast Diffusion Models for Text-to-Image Synthesis},
author={Youngwan Lee and Kwanyong Park and Yoorhim Cho and Yong-Ju Lee and Sung Ju Hwang},
year={2023},
eprint={2312.04005},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
naimul011/Orca-2-7b-final
|
naimul011
| 2024-10-17T07:24:22Z | 59 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2024-10-17T07:22:12Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thdangtr/blip_title_v1.0_e2_p1
|
thdangtr
| 2024-10-17T07:24:09Z | 52 | 0 |
transformers
|
[
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-17T07:23:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mergekit-community/mergekit-slerp-wphccbj
|
mergekit-community
| 2024-10-17T07:22:51Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Equall/Saul-7B-Base",
"base_model:merge:Equall/Saul-7B-Base",
"base_model:HuggingFaceH4/zephyr-7b-beta",
"base_model:merge:HuggingFaceH4/zephyr-7b-beta",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T07:17:40Z |
---
base_model:
- HuggingFaceH4/zephyr-7b-beta
- Equall/Saul-Base
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
* [Equall/Saul-Base](https://huggingface.co/Equall/Saul-Base)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: Equall/Saul-Base
layer_range: [0, 32]
- model: HuggingFaceH4/zephyr-7b-beta
layer_range: [0, 32]
merge_method: slerp
base_model: HuggingFaceH4/zephyr-7b-beta
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
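As a rough sketch of how such a merge is typically reproduced (assuming mergekit is installed and the YAML above is saved as `config.yaml`; the output path is an assumption):

```bash
pip install mergekit
# Run the SLERP merge described above and write the merged weights to ./merged-model
mergekit-yaml config.yaml ./merged-model --cuda
```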
|
mahmoudkamal105/paligemma5000
|
mahmoudkamal105
| 2024-10-17T07:20:34Z | 5 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-448",
"base_model:finetune:google/paligemma-3b-pt-448",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2024-10-16T08:01:10Z |
---
library_name: transformers
license: gemma
base_model: google/paligemma-3b-pt-448
tags:
- generated_from_trainer
model-index:
- name: paligemma5000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma5000
This model is a fine-tuned version of [google/paligemma-3b-pt-448](https://huggingface.co/google/paligemma-3b-pt-448) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
- label_smoothing_factor: 0.1
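For reference, the list above maps to a 🤗 `TrainingArguments` configuration roughly like the following sketch (the `output_dir` is an assumption; the rest mirrors the reported values):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="paligemma5000",        # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # gives the total train batch size of 8
    num_train_epochs=2,
    lr_scheduler_type="linear",
    warmup_steps=2,
    label_smoothing_factor=0.1,
    seed=42,
)
```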
### Training results
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
Huan69/Belle-whisper-large-v3-zh-punct-fasterwhisper
|
Huan69
| 2024-10-17T07:19:23Z | 233 | 2 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2024-10-16T08:40:04Z |
---
license: apache-2.0
---
## Introduction
This model is a modified version of **Belle-whisper-large-v3-zh-punct**, which enhances Chinese punctuation mark capabilities while maintaining strong performance on Chinese ASR benchmarks. The modifications were made to suit specific use cases.
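A minimal transcription sketch with the `faster-whisper` package (loading directly by this repo ID, the audio path, and the decoding options are assumptions):

```python
from faster_whisper import WhisperModel

# Assumed: this repo hosts the CTranslate2 conversion expected by faster-whisper
model = WhisperModel(
    "Huan69/Belle-whisper-large-v3-zh-punct-fasterwhisper",
    device="cuda",
    compute_type="float16",
)

segments, info = model.transcribe("audio.wav", language="zh", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```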
### Citation
If you use this model, please cite the original work:
```bibtex
@misc{BELLE,
author = {BELLEGroup},
title = {BELLE: Be Everyone's Large Language model Engine},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LianjiaTech/BELLE}},
}
```
Original repositories:
- https://github.com/LianjiaTech/BELLE
- https://github.com/shuaijiang/Whisper-Finetune
|
mateiaassAI/teacher_redv2
|
mateiaassAI
| 2024-10-17T07:18:58Z | 84 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dumitrescustefan/bert-base-romanian-cased-v1",
"base_model:finetune:dumitrescustefan/bert-base-romanian-cased-v1",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2024-10-16T15:41:34Z |
---
library_name: transformers
license: mit
base_model: dumitrescustefan/bert-base-romanian-cased-v1
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: teacher_redv2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# teacher_redv2
This model is a fine-tuned version of [dumitrescustefan/bert-base-romanian-cased-v1](https://huggingface.co/dumitrescustefan/bert-base-romanian-cased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2782
- F1: 0.6913
- Roc Auc: 0.8098
- Accuracy: 0.5764
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| No log | 1.0 | 256 | 0.2588 | 0.6200 | 0.7477 | 0.5046 |
| 0.2876 | 2.0 | 512 | 0.2415 | 0.6775 | 0.7939 | 0.5893 |
| 0.2876 | 3.0 | 768 | 0.2473 | 0.6853 | 0.8021 | 0.5948 |
| 0.146 | 4.0 | 1024 | 0.2523 | 0.6885 | 0.7969 | 0.5985 |
| 0.146 | 5.0 | 1280 | 0.2655 | 0.6846 | 0.7975 | 0.5875 |
| 0.083 | 6.0 | 1536 | 0.2778 | 0.6930 | 0.8110 | 0.5838 |
| 0.083 | 7.0 | 1792 | 0.2782 | 0.6913 | 0.8098 | 0.5764 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
uinsuska/sd-class-butterflies-32
|
uinsuska
| 2024-10-17T07:14:53Z | 37 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] |
unconditional-image-generation
| 2024-10-17T07:14:11Z |
---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('uinsuska/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
Serione/opt-125m-5
|
Serione
| 2024-10-17T07:14:42Z | 187 | 0 |
transformers
|
[
"transformers",
"safetensors",
"opt",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-16T16:42:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
enginia/tiny_fsdp_dbc_171024
|
enginia
| 2024-10-17T07:02:10Z | 131 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:59:47Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Qevacot-7B-v2-GGUF
|
mradermacher
| 2024-10-17T06:52:06Z | 17 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/Qevacot-7B-v2",
"base_model:quantized:bunnycore/Qevacot-7B-v2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:17:29Z |
---
base_model: bunnycore/Qevacot-7B-v2
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/bunnycore/Qevacot-7B-v2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qevacot-7B-v2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
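As a quick sketch (assuming a recent llama.cpp build with Hugging Face download support), one of the quants listed below can be run directly, e.g.:

```bash
llama-cli --hf-repo mradermacher/Qevacot-7B-v2-GGUF \
          --hf-file Qevacot-7B-v2.Q4_K_M.gguf \
          -p "Write a short haiku about autumn."
```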
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.IQ4_XS.gguf) | IQ4_XS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qevacot-7B-v2-GGUF/resolve/main/Qevacot-7B-v2.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Jerry666/GOT-OCR2_0-716M-BF16-GGUF
|
Jerry666
| 2024-10-17T06:51:44Z | 232 | 0 | null |
[
"gguf",
"endpoints_compatible",
"region:us"
] | null | 2024-10-16T09:09:44Z |
# Release
- 2024.10.16: [GOT-OCR2_0-716M-BF16-GGUF](https://huggingface.co/Jerry666/GOT-OCR2_0-716M-BF16-GGUF)
# Description
[gguf-py](https://github.com/jerrylsu/gguf-py) is a Python package for writing binary files in the GGUF format, based on llama.cpp.
# Usage
```bash
python convert_hf_to_gguf.py --outtype bf16 --model ~/GOT-OCR2_0 --outfile ~/output/GOT-OCR2_0-GGUF
```
# Adding Supported Model
[GOT_OCR2](https://huggingface.co/stepfun-ai/GOT-OCR2_0)
Continue...
# References
[llama.cpp](https://github.com/ggerganov/llama.cpp): LLM inference in C/C++.
[GOT-OCR2.0](https://github.com/Ucas-HaoranWei/GOT-OCR2.0): Official code implementation of General OCR Theory: Towards OCR-2.0 via a Unified End-to-end Model.
|
unsloth/Llama-3.1-Nemotron-70B-Instruct-GGUF
|
unsloth
| 2024-10-17T06:49:07Z | 113 | 1 |
transformers
|
[
"transformers",
"gguf",
"nvidia",
"llama3.1",
"unsloth",
"llama",
"text-generation",
"en",
"dataset:nvidia/HelpSteer2",
"base_model:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-70B-Instruct-HF",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-17T05:19:03Z |
---
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
datasets:
- nvidia/HelpSteer2
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- nvidia
- llama3.1
- unsloth
- llama
---
# Finetune Llama 3.2, NVIDIA Nemotron, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
# unsloth/Llama-3.1-Nemotron-70B-Instruct-GGUF
For more details on the model, please go to NVIDIA's original [model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Ys44kVvmeZtnICzWz0xgpRnrIOjZAuxp?usp=sharing) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_?usp=sharing) | 2.2x faster | 62% less |
| **DPO - Zephyr** | [▶️ Start on Colab](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) | 1.9x faster | 19% less |
- This [conversational notebook](https://colab.research.google.com/drive/1Aau3lgPzeZKQ-98h69CCu1UJcvIBLmy2?usp=sharing) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/drive/1ef-tab5bhkvWmBOObepl1WgJvfvSzn5Q?usp=sharing) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use 1. Due to overhead, 1x T4 is 5x faster.
## Special Thanks
A huge thank you to Meta and the Llama team for creating these models, and to NVIDIA for fine-tuning and releasing them.
|
Envoid/Llama-3.05-Nemotron-Tenyxchat-Storybreaker-70B
|
Envoid
| 2024-10-17T06:47:48Z | 12 | 1 | null |
[
"safetensors",
"llama",
"merge",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-10-17T02:55:26Z |
---
license: cc-by-nc-4.0
tags:
- merge
---

# Llama-3.05-Nemotron-Tenyxchat-Storybreaker-70B
is a 40/60 SLERP Merge of [Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B](https://huggingface.co/Envoid/Llama-3-TenyxChat-DaybreakStorywriter-70B?not-for-all-audiences=true) onto [nvidia/Llama-3.1-Nemotron-70B-Instruct-HF](https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF) utilizing the following config:
```
models:
- model: ./Envoid_Llama-3-TenyxChat-DaybreakStorywriter-70B
- model: ./nvidia_Llama-3.1-Nemotron-70B-Instruct-HF
merge_method: slerp
base_model: ./nvidia_Llama-3.1-Nemotron-70B-Instruct-HF
parameters:
t:
- value: 0.4
dtype: bfloat16
```
## Caution: As is always the case with SLERP merges, there may be edge cases in which unintended model behaviors emerge, so always use with caution.
The 'sloppiness' of Nemotron seems to be somewhat reined in (but still exists) while maintaining its personable assistant personality and safety (in assistant mode it will still prompt you with a warning before producing sensitive content).
Overall it provides a solid option for RP and creative writing while still functioning as an assistant model, if desired. If used to continue a roleplay it will generally follow the ongoing cadence of the conversation.
### It utilizes the Llama 3 prompt format.
|
QuantFactory/Ministral-3b-instruct-GGUF
|
QuantFactory
| 2024-10-17T06:41:32Z | 658 | 8 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2024-10-17T06:25:04Z |
---
library_name: transformers
inference:
parameters:
temperature: 1
top_p: 0.95
top_k: 40
repetition_penalty: 1.2
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
[](https://hf.co/QuantFactory)
# QuantFactory/Ministral-3b-instruct-GGUF
This is quantized version of [ministral/Ministral-3b-instruct](https://huggingface.co/ministral/Ministral-3b-instruct) created using llama.cpp
# Original Model Card

### Model Description
<!-- Provide a longer summary of what this model is. -->
Ministral is a series of language models built with the same architecture as the famous Mistral model, but at a smaller size.
- **Model type:** A 3B parameter GPT-like model fine-tuned on a mix of publicly available, synthetic datasets.
- **Language(s) (NLP):** Primarily English
- **License:** Apache 2.0
- **Finetuned from model:** [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
|
BroAlanTaps/GPT2-large-4-26000steps
|
BroAlanTaps
| 2024-10-17T06:39:53Z | 135 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:38:14Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sneha2803/bart_model
|
sneha2803
| 2024-10-17T06:38:31Z | 176 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2024-10-17T06:18:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/SthenoMix3.3-GGUF
|
mradermacher
| 2024-10-17T06:32:05Z | 16 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:mergekit-community/SthenoMix3.3",
"base_model:quantized:mergekit-community/SthenoMix3.3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-16T21:27:15Z |
---
base_model: mergekit-community/SthenoMix3.3
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mergekit-community/SthenoMix3.3
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/SthenoMix3.3-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SthenoMix3.3-GGUF/resolve/main/SthenoMix3.3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/QevaCoT-7B-Stock-i1-GGUF
|
mradermacher
| 2024-10-17T06:32:05Z | 963 | 1 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:bunnycore/QevaCoT-7B-Stock",
"base_model:quantized:bunnycore/QevaCoT-7B-Stock",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2024-10-17T05:32:51Z |
---
base_model: bunnycore/QevaCoT-7B-Stock
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/bunnycore/QevaCoT-7B-Stock
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/QevaCoT-7B-Stock-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ1_S.gguf) | i1-IQ1_S | 2.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ1_M.gguf) | i1-IQ1_M | 2.1 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_S.gguf) | i1-IQ2_S | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ2_M.gguf) | i1-IQ2_M | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q2_K.gguf) | i1-Q2_K | 3.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_S.gguf) | i1-IQ3_S | 3.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ3_M.gguf) | i1-IQ3_M | 3.7 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.9 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.5 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.5 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_0.gguf) | i1-Q4_0 | 4.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/QevaCoT-7B-Stock-i1-GGUF/resolve/main/QevaCoT-7B-Stock.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
jjaegii/Llama-3.1-8B-LoRA-kolon-sg-v2-merged-GPTQ-INT4
|
jjaegii
| 2024-10-17T06:25:49Z | 5 | 0 |
transformers
|
[
"transformers",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2024-10-17T03:00:23Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF
|
Triangle104
| 2024-10-17T06:20:57Z | 7 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T06:19:08Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF
This model was converted to GGUF format from [`ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) for more details on the model.
---
Model details:
-
UPDATE: For those getting gibberish results: the LoRA was merged incorrectly into the base model after training. All files have been reuploaded, so it should work properly now.
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive by ensuring no two entries in the dataset share repeated characters or situations, so the model does not latch on to a certain personality and remains capable of understanding and acting appropriately in any character or situation.
Early tests by users found that these models do not feel like any other RP models, having a different style and generally not feeling in-bred.
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
We also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server! https://discord.com/invite/t75KbPgwhk
Model Description
ArliAI-RPMax-12B-v1.2 is a variant based on Mistral Nemo 12B Instruct 2407.
This is arguably the most successful RPMax model due to how Mistral is already very uncensored in the first place.
The v1.2 update completely removes non-creative/RP examples from the dataset and is also an incremental improvement of the RPMax dataset, with further deduplication and better filtering to cut out irrelevant description text that came from card-sharing sites.
Specs
Context Length: 128K
Parameters: 12B
Training Details
Sequence Length: 8192
Training Duration: Approximately 2 days on 2x3090Ti
Epochs: 1 epoch training for minimized repetition sickness
LORA: 64-rank 128-alpha, resulting in ~2% trainable weights
Learning Rate: 0.00001
Gradient accumulation: Very low 32 for better learning.
Quantization
The model is available in quantized formats:
FP16: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
GGUF: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GGUF
Suggested Prompt Format
Mistral Instruct Prompt Format
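For reference, a Mistral Instruct style prompt looks roughly like this (illustrative sketch only; check the tokenizer's chat template for the exact spacing and special tokens):

```
<s>[INST] Write the opening line of a fantasy story. [/INST] The dragon woke before dawn.</s>[INST] Continue the scene. [/INST]
```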
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q6_K-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q6_k.gguf -c 2048
```
|
Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF
|
Triangle104
| 2024-10-17T06:11:58Z | 9 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"base_model:quantized:ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-10-17T05:58:15Z |
---
base_model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2`](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) for more details on the model.
---
## Model details
UPDATE: For those getting gibberish results, the LoRA was merged to the base model incorrectly after training. All files have been re-uploaded, so the model should now work properly.
RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, which prevents the model from latching onto a single personality and helps it understand and respond appropriately to any character or situation.
Early user tests suggest that these models do not feel like other RP models, having a distinct style and generally not feeling in-bred.
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
We also have a models ranking page at https://www.arliai.com/models-ranking
Ask questions in our new Discord Server! https://discord.com/invite/t75KbPgwhk
### Model Description
ArliAI-RPMax-12B-v1.2 is a variant based on Mistral Nemo 12B Instruct 2407.
This is arguably the most successful RPMax model, since Mistral is already largely uncensored to begin with.
The v1.2 update completely removes non-creative/RP examples from the dataset and incrementally improves the RPMax dataset with further deduplication and better filtering to cut out irrelevant description text carried over from card-sharing sites.
### Specs
- Context Length: 128K
- Parameters: 12B
### Training Details
- Sequence Length: 8192
- Training Duration: approximately 2 days on 2x3090Ti
- Epochs: 1 epoch, to minimize repetition sickness
- LoRA: rank 64, alpha 128, resulting in ~2% trainable weights
- Learning Rate: 0.00001
- Gradient Accumulation: 32, kept deliberately low for better learning
### Quantization
The model is available in quantized formats:
- FP16: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
- GGUF: https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-GGUF
### Suggested Prompt Format
Mistral Instruct Prompt Format
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Mistral-Nemo-12B-ArliAI-RPMax-v1.2-Q4_K_M-GGUF --hf-file mistral-nemo-12b-arliai-rpmax-v1.2-q4_k_m.gguf -c 2048
```
|
Aldrich12/my-fine-tuned-model-ppo
|
Aldrich12
| 2024-10-17T06:10:00Z | 202 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:08:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gkMSDA/FinChat298_Solar248M_Pretrain_DJ30_Model_V2
|
gkMSDA
| 2024-10-17T06:04:30Z | 132 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-10-17T06:03:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
softwareweaver/Twilight-Large-123B-EXL2-5bpw
|
softwareweaver
| 2024-10-17T05:57:27Z | 9 | 1 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:TheDrummer/Behemoth-123B-v1",
"base_model:merge:TheDrummer/Behemoth-123B-v1",
"base_model:mistralai/Mistral-Large-Instruct-2407",
"base_model:merge:mistralai/Mistral-Large-Instruct-2407",
"base_model:schnapper79/lumikabra-123B_v0.4",
"base_model:merge:schnapper79/lumikabra-123B_v0.4",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] |
text-generation
| 2024-10-17T04:29:50Z |
---
base_model:
- schnapper79/lumikabra-123B_v0.4
- mistralai/Mistral-Large-Instruct-2407
- TheDrummer/Behemoth-123B-v1
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Twilight-Large
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit) by @softwareweaver. Use the prompt format that Mistral Large uses.
## Merge Details
### Merge Method
This model was merged using the della_linear merge method using [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407) as a base.
### Models Merged
The following models were included in the merge:
* [schnapper79/lumikabra-123B_v0.4](https://huggingface.co/schnapper79/lumikabra-123B_v0.4)
* [TheDrummer/Behemoth-123B-v1](https://huggingface.co/TheDrummer/Behemoth-123B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Behemoth-123B-v1
parameters:
weight: 0.25
density: 0.9
- model: schnapper79/lumikabra-123B_v0.4
parameters:
weight: 0.3
density: 0.9
merge_method: della_linear
base_model: mistralai/Mistral-Large-Instruct-2407
parameters:
epsilon: 0.05
lambda: 1
int8_mask: true
dtype: bfloat16
```
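To reproduce a merge from a configuration like this, mergekit exposes a `mergekit-yaml` entry point (a hedged sketch; the config filename, output path, and flag choice are illustrative, not from this card):
```bash
# Hedged sketch: run the della_linear merge described by the YAML above.
# config.yaml is a local copy of the configuration; the output path is illustrative.
pip install mergekit
mergekit-yaml config.yaml ./Twilight-Large-123B --cuda
```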
|
Falah/haider_al_abadi
|
Falah
| 2024-10-17T05:49:24Z | 7 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2024-10-17T04:43:39Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Haider_Al_Abadi
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Falah/haider_al_abadi', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
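As a hedged illustration of fusing, the loaded LoRA can be merged into the pipeline weights to speed up repeated inference (the scale value below is illustrative, not from this card):
```py
# Hedged sketch: fuse the LoRA loaded above into the base pipeline weights.
pipeline.fuse_lora(lora_scale=1.0)  # scale is illustrative
image = pipeline('your prompt').images[0]
```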
|
BEGADE/bot
|
BEGADE
| 2024-10-17T05:18:18Z | 6 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"license:mit",
"region:us"
] | null | 2024-10-17T04:54:50Z |
---
base_model: openai-community/gpt2
library_name: peft
license: mit
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: bot
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bot
This model is a fine-tuned version of [openai-community/gpt2](https://huggingface.co/openai-community/gpt2) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
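A minimal sketch of how these hyperparameters might map onto a TRL SFT run (hedged: the dataset, output directory, and LoRA settings are placeholders, not taken from this card):
```python
# Hedged sketch mapping the hyperparameters above onto TrainingArguments + SFTTrainer.
# Dataset, output_dir, and LoRA settings are placeholders, not from the card.
from datasets import load_dataset
from transformers import TrainingArguments
from peft import LoraConfig
from trl import SFTTrainer

# Placeholder dataset; assumes a "text" column for SFT.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

args = TrainingArguments(
    output_dir="bot",                 # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,    # effective train batch size 4
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=2,
    seed=42,
)

trainer = SFTTrainer(
    model="openai-community/gpt2",
    args=args,
    train_dataset=train_dataset,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # placeholder LoRA settings
)
trainer.train()
```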
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
JumF/jum4
|
JumF
| 2024-10-17T05:17:23Z | 23 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2024-10-17T05:17:07Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/a_photo_of_Jum(16).jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Jum
---
# jum4
<Gallery />
## Model description
Upload of Jum.
## Trigger words
You should use `Jum` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/JumF/jum4/tree/main) them in the Files & versions tab.
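A minimal diffusers sketch for loading this LoRA (hedged: the weight filename below is a placeholder; check the Files & versions tab for the actual name):
```py
# Hedged sketch: load the FLUX.1-dev base pipeline and apply this LoRA.
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights('JumF/jum4', weight_name='lora.safetensors')  # placeholder filename
image = pipeline('a photo of Jum').images[0]  # `Jum` is the trigger word
```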
|