| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
mradermacher/arco-chat-v0.1-GGUF | mradermacher | "2025-02-12T09:56:47Z" | 0 | 0 | transformers | ["transformers", "gguf", "en", "dataset:stingning/ultrachat", "base_model:appvoid/arco-chat-v0.1", "base_model:quantized:appvoid/arco-chat-v0.1", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | "2025-02-12T09:54:08Z" |
---
base_model: appvoid/arco-chat-v0.1
datasets:
- stingning/ultrachat
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/appvoid/arco-chat-v0.1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
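For multi-part quants, the parts simply need to be joined byte-for-byte in order. A minimal sketch in python (the part filenames are hypothetical; use the names of the parts you actually downloaded):

```python
from pathlib import Path

def concat_gguf_parts(part_paths, output_path):
    # Split GGUF uploads are plain byte-wise splits: joining the parts
    # in order reproduces the original single-file quant exactly.
    with open(output_path, "wb") as out:
        for part in part_paths:
            out.write(Path(part).read_bytes())

# Hypothetical part names -- adjust to the files you downloaded:
# concat_gguf_parts(
#     ["model.gguf.part1of2", "model.gguf.part2of2"],
#     "model.gguf",
# )
```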
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q2_K.gguf) | Q2_K | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q3_K_S.gguf) | Q3_K_S | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q3_K_M.gguf) | Q3_K_M | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q3_K_L.gguf) | Q3_K_L | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.IQ4_XS.gguf) | IQ4_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q4_K_S.gguf) | Q4_K_S | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q4_K_M.gguf) | Q4_K_M | 0.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q5_K_S.gguf) | Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q5_K_M.gguf) | Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q6_K.gguf) | Q6_K | 0.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/arco-chat-v0.1-GGUF/resolve/main/arco-chat-v0.1.f16.gguf) | f16 | 1.1 | 16 bpw, overkill |
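As a rough sanity check on the sizes above, a quant's file size scales with its bits per weight (ignoring the small GGUF header and per-block metadata overhead). A sketch, using the 16-bpw f16 row to back out an approximate parameter count:

```python
def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    # On-disk size is roughly parameters x bits-per-weight, in GB;
    # real files are slightly larger due to header/metadata overhead.
    return n_params * bits_per_weight / 8 / 1e9

# The f16 file stores 16 bits per weight at ~1.1 GB, implying
# roughly 0.55B parameters for this model:
n_params = 1.1e9 * 8 / 16
print(round(quant_size_gb(n_params, 16), 1))  # 1.1 (recovers the f16 row)
```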
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Ghaythfd/Llama3.1_8b_finetuned_revised_v1.1 | Ghaythfd | "2024-11-15T11:13:06Z" | 10 | 0 | null | ["gguf", "llama", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | "2024-11-15T10:05:04Z" |
---
license: apache-2.0
---
|
KeLoPa/Llama-3-8B-Instruct-Finance-RAG4 | KeLoPa | "2024-10-03T15:03:28Z" | 5 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us"] | text-generation | "2024-10-03T15:00:47Z" |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
huggingface-course/mt5-finetuned-amazon-en-es | huggingface-course | "2023-12-20T22:11:29Z" | 31 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google/mt5-small", "base_model:finetune:google/mt5-small", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | summarization | "2022-03-02T23:29:05Z" |
---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
base_model: google/mt5-small
model-index:
- name: mt5-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mt5-finetuned-amazon-en-es
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0285
- Rouge1: 16.9728
- Rouge2: 8.2969
- Rougel: 16.8366
- Rougelsum: 16.851
- Gen Len: 10.1597
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
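With `lr_scheduler_type: linear` and no warmup listed, the learning rate decays from 5.6e-05 at the first step to zero at the last. A minimal sketch of that schedule:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5.6e-5) -> float:
    # Linear decay: full base_lr at step 0, zero at total_steps.
    return base_lr * max(0.0, 1.0 - step / total_steps)

total_steps = 8 * 1209  # 8 epochs at 1209 optimizer steps each
print(linear_lr(0, total_steps))            # 5.6e-05 at the start
print(linear_lr(total_steps, total_steps))  # 0.0 at the end
```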
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 7.1016 | 1.0 | 1209 | 3.3069 | 13.9858 | 5.8437 | 13.6053 | 13.5125 | 8.3782 |
| 3.898 | 2.0 | 2418 | 3.1567 | 16.6706 | 8.6393 | 16.2882 | 16.2249 | 9.7521 |
| 3.5915 | 3.0 | 3627 | 3.0928 | 17.111 | 8.3921 | 16.9139 | 16.7805 | 10.3445 |
| 3.4174 | 4.0 | 4836 | 3.0482 | 16.9728 | 8.3066 | 16.8868 | 16.8485 | 10.3151 |
| 3.3258 | 5.0 | 6045 | 3.0375 | 16.5972 | 8.2621 | 16.3524 | 16.3093 | 10.0672 |
| 3.2427 | 6.0 | 7254 | 3.0232 | 17.3009 | 8.6087 | 17.0782 | 17.0105 | 10.0756 |
| 3.2009 | 7.0 | 8463 | 3.0302 | 16.9284 | 8.6569 | 16.7885 | 16.7784 | 10.2143 |
| 3.1838 | 8.0 | 9672 | 3.0285 | 16.9728 | 8.2969 | 16.8366 | 16.851 | 10.1597 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.9.1+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
|
prashantloni/lilt-en-aadhaar-red | prashantloni | "2024-04-24T13:43:13Z" | 108 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "lilt", "token-classification", "generated_from_trainer", "base_model:SCUT-DLVCLab/lilt-roberta-en-base", "base_model:finetune:SCUT-DLVCLab/lilt-roberta-en-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | "2024-04-24T13:18:41Z" |
---
license: mit
base_model: SCUT-DLVCLab/lilt-roberta-en-base
tags:
- generated_from_trainer
model-index:
- name: lilt-en-aadhaar-red
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lilt-en-aadhaar-red
This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0287
- Adhaar Number: {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39}
- Ame: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23}
- Ather Name: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2}
- Ather Name Back: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19}
- Ather Name Front Top: {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11}
- Ddress Back: {'precision': 0.9512195121951219, 'recall': 0.9629629629629629, 'f1': 0.9570552147239264, 'number': 81}
- Ddress Front: {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52}
- Ender: {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21}
- Ob: {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21}
- Obile Number: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10}
- Ther: {'precision': 0.958974358974359, 'recall': 0.9689119170984456, 'f1': 0.9639175257731959, 'number': 193}
- Overall Precision: 0.9623
- Overall Recall: 0.9725
- Overall F1: 0.9673
- Overall Accuracy: 0.9973
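The overall F1 above is the harmonic mean of overall precision and recall, which is easy to cross-check (the tiny discrepancy against the reported 0.9673 comes from the precision and recall being pre-rounded):

```python
def f1_score(precision: float, recall: float) -> float:
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.9623, 0.9725), 4))  # 0.9674, vs. 0.9673 reported
```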
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Adhaar Number | Ame | Ather Name | Ather Name Back | Ather Name Front Top | Ddress Back | Ddress Front | Ender | Ob | Obile Number | Ther | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1651 | 10.0 | 200 | 0.0226 | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 39} | {'precision': 0.9130434782608695, 'recall': 0.9130434782608695, 'f1': 0.9130434782608695, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9811320754716981, 'recall': 1.0, 'f1': 0.9904761904761905, 'number': 52} | {'precision': 0.9047619047619048, 'recall': 0.9047619047619048, 'f1': 0.9047619047619048, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9497 | 0.9597 | 0.9547 | 0.9962 |
| 0.004 | 20.0 | 400 | 0.0270 | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9090909090909091, 'recall': 0.9523809523809523, 'f1': 0.9302325581395349, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9333333333333333, 'recall': 0.9430051813471503, 'f1': 0.9381443298969072, 'number': 193} | 0.9454 | 0.9534 | 0.9494 | 0.9964 |
| 0.0016 | 30.0 | 600 | 0.0321 | {'precision': 0.925, 'recall': 0.9487179487179487, 'f1': 0.9367088607594937, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 0.6666666666666666, 'recall': 1.0, 'f1': 0.8, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9282051282051282, 'recall': 0.9378238341968912, 'f1': 0.9329896907216495, 'number': 193} | 0.9414 | 0.9534 | 0.9474 | 0.9959 |
| 0.0013 | 40.0 | 800 | 0.0243 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9390243902439024, 'recall': 0.9506172839506173, 'f1': 0.9447852760736196, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9487179487179487, 'recall': 0.9585492227979274, 'f1': 0.9536082474226804, 'number': 193} | 0.96 | 0.9661 | 0.9630 | 0.9973 |
| 0.0006 | 50.0 | 1000 | 0.0400 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 0.8947368421052632, 'f1': 0.9444444444444444, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.8902439024390244, 'recall': 0.9012345679012346, 'f1': 0.8957055214723927, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9471 | 0.9492 | 0.9481 | 0.9951 |
| 0.0003 | 60.0 | 1200 | 0.0323 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.926829268292683, 'recall': 0.9382716049382716, 'f1': 0.9325153374233128, 'number': 81} | {'precision': 0.9423076923076923, 'recall': 0.9423076923076923, 'f1': 0.9423076923076923, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9455 | 0.9555 | 0.9505 | 0.9964 |
| 0.0005 | 70.0 | 1400 | 0.0287 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.9512195121951219, 'recall': 0.9629629629629629, 'f1': 0.9570552147239264, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.958974358974359, 'recall': 0.9689119170984456, 'f1': 0.9639175257731959, 'number': 193} | 0.9623 | 0.9725 | 0.9673 | 0.9973 |
| 0.0004 | 80.0 | 1600 | 0.0417 | {'precision': 0.9487179487179487, 'recall': 0.9487179487179487, 'f1': 0.9487179487179487, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | {'precision': 0.9036144578313253, 'recall': 0.9259259259259259, 'f1': 0.9146341463414634, 'number': 81} | {'precision': 0.9607843137254902, 'recall': 0.9423076923076923, 'f1': 0.9514563106796117, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9285714285714286, 'recall': 0.9430051813471503, 'f1': 0.9357326478149101, 'number': 193} | 0.9393 | 0.9513 | 0.9453 | 0.9951 |
| 0.0001 | 90.0 | 1800 | 0.0362 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9803921568627451, 'recall': 0.9615384615384616, 'f1': 0.970873786407767, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9384615384615385, 'recall': 0.9481865284974094, 'f1': 0.9432989690721649, 'number': 193} | 0.9516 | 0.9576 | 0.9546 | 0.9964 |
| 0.0001 | 100.0 | 2000 | 0.0378 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9336734693877551, 'recall': 0.9481865284974094, 'f1': 0.9408740359897172, 'number': 193} | 0.9476 | 0.9576 | 0.9526 | 0.9962 |
| 0.0001 | 110.0 | 2200 | 0.0379 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 0.9565217391304348, 'recall': 0.9565217391304348, 'f1': 0.9565217391304348, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9285714285714286, 'recall': 0.9430051813471503, 'f1': 0.9357326478149101, 'number': 193} | 0.9434 | 0.9534 | 0.9484 | 0.9959 |
| 0.0001 | 120.0 | 2400 | 0.0361 | {'precision': 0.9743589743589743, 'recall': 0.9743589743589743, 'f1': 0.9743589743589743, 'number': 39} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 23} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 19} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | {'precision': 0.9146341463414634, 'recall': 0.9259259259259259, 'f1': 0.9202453987730062, 'number': 81} | {'precision': 0.9615384615384616, 'recall': 0.9615384615384616, 'f1': 0.9615384615384616, 'number': 52} | {'precision': 0.9523809523809523, 'recall': 0.9523809523809523, 'f1': 0.9523809523809523, 'number': 21} | {'precision': 0.9545454545454546, 'recall': 1.0, 'f1': 0.9767441860465117, 'number': 21} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 10} | {'precision': 0.9336734693877551, 'recall': 0.9481865284974094, 'f1': 0.9408740359897172, 'number': 193} | 0.9476 | 0.9576 | 0.9526 | 0.9962 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
imagepipeline/Majicmix-lux | imagepipeline | "2024-04-16T05:17:29Z" | 43 | 0 | diffusers | ["diffusers", "imagepipeline", "imagepipeline.io", "text-to-image", "ultra-realistic", "license:creativeml-openrail-m", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionPipeline", "region:us"] | text-to-image | "2024-04-16T05:16:20Z" |
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
## Majicmix-lux
**This checkpoint model is uploaded on [imagepipeline.io](https://imagepipeline.io/)**
Model details - [Majicmix-lux on imagepipeline.io](https://imagepipeline.io/models/Majicmix-lux?id=ccd867a7-ee2b-49a9-9387-ef2f17133a21/)
## How to try this model?
You can try it locally or send an API call to test the output quality.
Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/). No payment required.
Coding in `php`, `javascript`, `node`, etc.? Check out our [documentation](https://docs.imagepipeline.io/docs/introduction).
```python
import requests
import json
url = "https://imagepipeline.io/sd/text2image/v1/run"
payload = json.dumps({
"model_id": "ccd867a7-ee2b-49a9-9387-ef2f17133a21",
"prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
"negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
"width": "512",
"height": "512",
"samples": "1",
"num_inference_steps": "30",
"safety_checker": false,
"guidance_scale": 7.5,
"multi_lingual": "no",
"embeddings": "",
"lora_models": "",
"lora_weights": ""
})
headers = {
'Content-Type': 'application/json',
'API-Key': 'your_api_key'
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
Get more ready-to-use `MODELS` like this for `SD 1.5` and `SDXL`:
[Browse all models](https://imagepipeline.io/models)
### API Reference
#### Generate Image
```http
https://api.imagepipeline.io/sd/text2image/v1
```
| Headers | Type | Description |
|:----------------------| :------- |:-------------------------------------------------------------------------------------------------------------------|
| `API-Key` | `str` | Get your `API_KEY` from [imagepipeline.io](https://imagepipeline.io/) |
| `Content-Type` | `str` | application/json - content type of the request body |
| Parameter | Type | Description |
| :-------- | :------- | :------------------------- |
| `model_id` | `str` | Your base model, find available lists in [models page](https://imagepipeline.io/models) or upload your own|
| `prompt` | `str` | Text Prompt. Check our [Prompt Guide](https://docs.imagepipeline.io/docs/SD-1.5/docs/extras/prompt-guide) for tips |
| `num_inference_steps` | `int [1-50]` | Noise is removed with each step, resulting in a higher-quality image over time. Ideal value 30-50 (without LCM) |
| `guidance_scale` | `float [1-20]` | Higher guidance scale prioritizes text prompt relevance but sacrifices image quality. Ideal value 7.5-12.5 |
| `lora_models` | `str, array` | Pass the model_id(s) of LoRA models that can be found in models page |
| `lora_weights` | `str, array` | Strength of the LoRA effect |
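Since `lora_models` and `lora_weights` both accept `str, array` values, multiple LoRAs can be passed as comma-separated lists, with one strength per model in the same order. A minimal sketch of assembling such a payload (the LoRA IDs below are hypothetical placeholders; pick real ones from the models page):

```python
import json

# Hypothetical payload; swap in your own model_id and LoRA IDs from the models page
payload = {
    "model_id": "your-base-model-id",            # placeholder, not a real ID
    "prompt": "ultra realistic portrait, cinematic lighting, 8K",
    "num_inference_steps": 30,                   # ideal 30-50 without LCM
    "guidance_scale": 7.5,                       # ideal 7.5-12.5
    "lora_models": "lora-id-1,lora-id-2",        # hypothetical LoRA IDs
    "lora_weights": "0.6,0.4",                   # one strength per LoRA, same order
}
body = json.dumps(payload)
print(body)
```

Send `body` as the request data, with the same `API-Key` and `Content-Type` headers shown in the example above.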
---
license: creativeml-openrail-m
tags:
- imagepipeline
- imagepipeline.io
- text-to-image
- ultra-realistic
pinned: false
pipeline_tag: text-to-image
---
### Feedback
If you have any feedback, please reach out to us at [email protected]
#### 🔗 Visit Website
[](https://imagepipeline.io/)
If you are the original author of this model, please [click here](https://airtable.com/apprTaRnJbDJ8ufOx/shr4g7o9B6fWfOlUR) to add credits
|
RichardErkhov/xvadov01_-_microcoderfim-1B-awq | RichardErkhov | "2025-01-06T10:54:13Z" | 6 | 0 | null | [
"safetensors",
"gpt_bigcode",
"arxiv:2207.14255",
"arxiv:1910.09700",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-06T10:53:43Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
microcoderfim-1B - AWQ
- Model creator: https://huggingface.co/xvadov01/
- Original model: https://huggingface.co/xvadov01/microcoderfim-1B/
Original model description:
---
library_name: transformers
license: mit
language:
- en
metrics:
- bleu
- code_eval
- rouge
- chrf
model_name: MicroCoderFIM-1B
base_model: bigcode/starcoderbase-1b
model-index:
- name: MicroCoderFIM-1B
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 65.46
verified: false
- name: pass@10
type: pass@10
value: 90.36
verified: false
- name: pass@100
type: pass@100
value: 94.43
verified: false
- task:
type: text-generation
dataset:
type: xvadov01/cpp_emb_nl2pl
name: xvadov01/cpp_emb_nl2pl
metrics:
- name: BLEU
type: bleu
value: 31.74
verified: false
- name: codeBLEU
type: codeBLEU
value: 40.53
verified: false
- name: chrf++
type: chrf
value: 51.54
verified: false
- name: rouge-l
type: rouge
value: 43.31
verified: false
---
# Model Card for Model ID
This is a finetuned version of StarCoderBase 1B using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on a [dataset](https://huggingface.co/datasets/xvadov01/cpp_emb_nl2pl) focused on embedded systems programming.
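With the Fill-in-the-Middle objective, the model is prompted with a code prefix and suffix and generates the missing middle. A small sketch of building such a prompt, assuming the StarCoder-family FIM special tokens (check the tokenizer's special tokens before relying on these exact strings):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    # Prefix-Suffix-Middle layout: the model generates the middle after <fim_middle>
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

# The model's completion would fill the body between the prefix and suffix
prompt = build_fim_prompt("int main() {\n    ", "\n    return 0;\n}")
print(prompt)
```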
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** Transformer decoder architecture with Multi-Query attention
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model [optional]:** [StarCoderBase 1B](https://huggingface.co/bigcode/starcoderbase-1b)
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** NVIDIA GeForce RTX 3090
- **Hours used:** 5h 25m
- **Carbon Emitted:** 0.83
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wanliyu1987/q-FrozenLake-v1-4x4-noSlippery | wanliyu1987 | "2023-08-21T12:45:14Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-21T12:45:12Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebooks
import gym  # or `import gymnasium as gym`, depending on your setup

model = load_from_hub(repo_id="wanliyu1987/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
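The loaded `model` dictionary also carries the agent's Q-table, so acting greedily is just an argmax over the row for the current state. A toy sketch (the array below is a stand-in, not the trained table):

```python
import numpy as np

def greedy_action(qtable, state):
    # Choose the action with the highest Q-value in this state
    return int(np.argmax(qtable[state]))

# Toy 2-state, 3-action table standing in for model["qtable"]
qtable = np.array([[0.1, 0.5, 0.2],
                   [0.0, 0.3, 0.9]])
print(greedy_action(qtable, 0))  # -> 1
print(greedy_action(qtable, 1))  # -> 2
```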
|
PQlet/lora-narutoblip-v1-ablation-r16-a16-module_to_k_to_v | PQlet | "2024-05-18T17:32:00Z" | 2 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-05-18T17:31:55Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
- lora
base_model: runwayml/stable-diffusion-v1-5
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - PQlet/lora-narutoblip-v1-ablation-r16-a16-module_to_k_to_v
These are LoRA adaption weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the Naruto-BLIP dataset. You can find some example images in the following.







## Intended uses & limitations
#### How to use
```python
# A possible snippet (unverified), using the standard diffusers LoRA-loading API
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("PQlet/lora-narutoblip-v1-ablation-r16-a16-module_to_k_to_v")
image = pipe("a naruto-style portrait of a ninja").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
hyper-accel/tiny-random-gemma | hyper-accel | "2025-02-10T05:58:30Z" | 113 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-10T05:57:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
cansurav/bert-base-uncased-finetuned-cola-learning_rate-8e-06 | cansurav | "2023-05-05T10:02:23Z" | 107 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-05T09:48:00Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-8e-06
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5752615459764325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-8e-06
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8389
- Matthews Correlation: 0.5753
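Matthews correlation balances all four confusion-matrix counts, which is why it is the standard metric for CoLA. A self-contained sketch of the binary-case formula (the counts below are illustrative, not this model's):

```python
from math import sqrt

def matthews_corr(tp, tn, fp, fn):
    # MCC = (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(matthews_corr(tp=90, tn=60, fp=20, fn=10), 4))
```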
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5241 | 1.0 | 535 | 0.4659 | 0.5046 |
| 0.3755 | 2.0 | 1070 | 0.4412 | 0.5650 |
| 0.2782 | 3.0 | 1605 | 0.5524 | 0.5395 |
| 0.2154 | 4.0 | 2140 | 0.6437 | 0.5651 |
| 0.1669 | 5.0 | 2675 | 0.7709 | 0.5650 |
| 0.1503 | 6.0 | 3210 | 0.8389 | 0.5753 |
| 0.1151 | 7.0 | 3745 | 0.8964 | 0.5681 |
| 0.1082 | 8.0 | 4280 | 0.9767 | 0.5548 |
| 0.0816 | 9.0 | 4815 | 0.9978 | 0.5498 |
| 0.0809 | 10.0 | 5350 | 1.0170 | 0.5576 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
qgallouedec/trpo-Humanoid-v3-4106392303 | qgallouedec | "2024-04-09T14:55:10Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"Humanoid-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"Humanoid-v4",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-28T15:12:30Z" | ---
library_name: stable-baselines3
tags:
- Humanoid-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
- Humanoid-v4
model-index:
- name: TRPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Humanoid-v3
type: Humanoid-v3
metrics:
- type: mean_reward
value: 3706.29 +/- 1857.04
name: mean_reward
verified: false
---
# **TRPO** Agent playing **Humanoid-v3**
This is a trained model of a **TRPO** agent playing **Humanoid-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo trpo --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Humanoid-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo trpo --env Humanoid-v3 -orga qgallouedec -f logs/
python -m rl_zoo3.enjoy --algo trpo --env Humanoid-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo trpo --env Humanoid-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo trpo --env Humanoid-v3 -f logs/ -orga qgallouedec
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('cg_damping', 0.1),
('cg_max_steps', 25),
('gae_lambda', 0.95),
('gamma', 0.99),
('learning_rate', 0.001),
('n_critic_updates', 20),
('n_envs', 2),
('n_steps', 1024),
('n_timesteps', 2000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('sub_sampling_factor', 1),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
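The `gamma` and `gae_lambda` entries above drive generalized advantage estimation inside TRPO. A minimal sketch of the backward recursion they parameterize (toy numbers, single non-terminating rollout, no done flags):

```python
def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    # delta_t = r_t + gamma * V(s_{t+1}) - V(s_t), accumulated backwards
    advantages = [0.0] * len(rewards)
    next_value, running = last_value, 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
        next_value = values[t]
    return advantages

adv = gae_advantages([1.0, 0.0, 1.0], values=[0.5, 0.4, 0.3], last_value=0.2)
print([round(a, 4) for a in adv])
```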
|
riccardogiorato/avatar-diffusion | riccardogiorato | "2023-05-16T09:25:31Z" | 51 | 11 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"Avatar",
"Avatar The Way of Water",
"film",
"James Cameron",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-11-06T09:44:10Z" | ---
language:
- en
tags:
- stable-diffusion
- stable-diffusion-diffusers
- Avatar
- Avatar The Way of Water
- film
- James Cameron
license: creativeml-openrail-m
---
<center><img src="https://huggingface.co/riccardogiorato/avatar-diffusion/resolve/main/assets/avatartwow.png" width="512" height="512"/></center>

# Avatar Diffusion
An AI model that generates artwork with Avatar style!
Based on a fine-tuned Stable Diffusion v1.5, trained with DreamBooth on more than 50 images from the latest trailer of Avatar: The Way of Water.
By [riccardogiorato](https://twitter.com/riccardogiorato)
> **Note**: To get the Avatar styles, use the **avatartwow style** keyword in your prompt.
>
> **Don't use** the **avatar** keyword, because it's already used by the original model but full of messy data.
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or FLAX/JAX.
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "riccardogiorato/avatar-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "a magical witch with blue hair with avatartwow style"
image = pipe(prompt).images[0]
image.save("./magical_witch.png")
```
# **👇Model👇**
The model weights are available on Hugging Face: https://huggingface.co/riccardogiorato/avatar-diffusion
# Usage
After the model is loaded, use the keyword **avatartwow** in your prompt, or even better **avatartwow style**.
## License
This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
The CreativeML OpenRAIL License specifies:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
[Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)
|
trenden/af171d77-3106-4f8c-aa27-6f085a8d810c | trenden | "2025-01-22T08:49:25Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:adapter:microsoft/Phi-3-mini-128k-instruct",
"license:mit",
"region:us"
] | null | "2025-01-22T06:50:44Z" | ---
library_name: peft
license: mit
base_model: microsoft/Phi-3-mini-128k-instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: af171d77-3106-4f8c-aa27-6f085a8d810c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/Phi-3-mini-128k-instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ce7b521ea53ac3c8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ce7b521ea53ac3c8_train_data.json
type:
field_input: article
field_instruction: question
field_output: content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: trenden/af171d77-3106-4f8c-aa27-6f085a8d810c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/ce7b521ea53ac3c8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 9f3b8ddd-4c77-445b-99ba-0b9ebff1d8b1
wandb_project: Birthday-SN56-3-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 9f3b8ddd-4c77-445b-99ba-0b9ebff1d8b1
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# af171d77-3106-4f8c-aa27-6f085a8d810c
This model is a fine-tuned version of [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3923
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
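The cosine scheduler with warmup above ramps the learning rate linearly for `warmup_steps`, then decays it along a half cosine. A sketch of that shape (a hypothetical helper, not the trainer's exact implementation; the total-step count is chosen for illustration):

```python
import math

def lr_at(step, base_lr=2e-4, warmup_steps=10, total_steps=100):
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # linear warmup from 0
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(lr_at(0), lr_at(10), lr_at(100))  # starts at 0, peaks at base_lr, decays to 0
```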
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 5.5514 | 0.0000 | 1 | 1.4503 |
| 4.9402 | 0.0000 | 3 | 1.4487 |
| 6.3914 | 0.0001 | 6 | 1.4354 |
| 6.0996 | 0.0001 | 9 | 1.3923 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jxctcv/Assala | Jxctcv | "2023-08-20T15:56:35Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-08-20T15:56:35Z" | ---
license: creativeml-openrail-m
---
|
krecceg/ppo-Huggy | krecceg | "2023-02-01T16:58:26Z" | 23 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-02-01T16:58:19Z" |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: krecceg/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
RayneAmes/vileplume_v2 | RayneAmes | "2025-02-13T06:03:46Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-13T05:58:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
AppyFizz/calrealxl-woman | AppyFizz | "2024-10-20T17:55:12Z" | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2024-10-20T17:51:00Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
---
### calrealxl woman on Stable Diffusion via Dreambooth
#### model by AppyFizz
This is the Stable Diffusion model fine-tuned on the calrealxl woman concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **calrealxl woman**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
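For a quick local test, a minimal `diffusers` inference sketch might look like the following. Only the repo id and the `calrealxl woman` instance prompt come from this card; the dtype, device, and step count are assumptions.

```python
# Hedged sketch: inference with this Dreambooth concept via diffusers.
# Only the repo id and the "calrealxl woman" instance prompt come from the
# card; dtype, device and step count are assumptions.
prompt = "a portrait photo of calrealxl woman, studio lighting"

def generate(prompt: str):
    # Heavy deps are imported lazily so the sketch reads without them installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "AppyFizz/calrealxl-woman", torch_dtype=torch.float16
    ).to("cuda")
    return pipe(prompt, num_inference_steps=30).images[0]

# Example (requires a GPU and downloading the weights):
#   generate(prompt).save("calrealxl_woman.png")
```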
|
LHRuig/northcuttsx | LHRuig | "2025-03-25T20:47:21Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-25T20:47:00Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: northcuttsx
---
# northcuttsx
<Gallery />
## Model description
northcuttsx lora
## Trigger words
You should use `northcuttsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/northcuttsx/tree/main) them in the Files & versions tab.
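Since this is a LoRA on top of FLUX.1-dev, a hedged `diffusers` loading sketch may help. The repo ids and the `northcuttsx` trigger word are taken from this card; the dtype, guidance scale, and step count are assumptions.

```python
# Hedged sketch: apply this LoRA on top of FLUX.1-dev with diffusers.
# Repo ids and the "northcuttsx" trigger word come from the card; dtype,
# guidance scale and step count are assumptions.
prompt = "northcuttsx wearing a suit, professional photo"

def generate(prompt: str):
    # Heavy deps are imported lazily so the sketch reads without them installed.
    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("LHRuig/northcuttsx")
    return pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]

# Example (requires GPU access and accepting the FLUX.1-dev license):
#   generate(prompt).save("northcuttsx.png")
```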
|
lesso11/473c4b04-5ed3-4238-8830-73ecb04bf77e | lesso11 | "2025-04-09T21:11:04Z" | 8 | 0 | null | [
"safetensors",
"qwen2",
"region:us"
] | null | "2025-04-01T09:39:13Z" | |
MinaMila/llama_instbase_GermanCredit_cfda_10ep_22 | MinaMila | "2025-03-27T19:47:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct",
"base_model:finetune:unsloth/Meta-Llama-3.1-8B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-27T19:43:42Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Mantis-VL/mantis-8b-idefics2-classification-example_4096_regression | Mantis-VL | "2024-06-30T21:35:41Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"text-classification",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-28T07:55:38Z" | ---
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-classification-example_4096_regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2-classification-example_4096_regression
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
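The effective batch size in the list above follows directly from the other parallelism entries — per-device batch × devices × gradient-accumulation steps:

```python
# The total train batch size reported above is the product of the other
# three parallelism hyperparameters in this card.
per_device_batch = 1   # train_batch_size
num_devices = 4        # num_devices (multi-GPU)
grad_accum = 16        # gradient_accumulation_steps

total_train_batch_size = per_device_batch * num_devices * grad_accum
print(total_train_batch_size)  # 64, matching "total_train_batch_size: 64"
```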
### Training results
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
stabgan/gemma-3-finetuned-medical-v1 | stabgan | "2025-04-10T16:44:51Z" | 0 | 1 | transformers | [
"transformers",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-10T16:13:33Z" | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** stabgan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RichardErkhov/choeyunbeom_-_llama3_KM-gguf | RichardErkhov | "2025-03-26T18:01:15Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-26T16:56:42Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama3_KM - GGUF
- Model creator: https://huggingface.co/choeyunbeom/
- Original model: https://huggingface.co/choeyunbeom/llama3_KM/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama3_KM.Q2_K.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q2_K.gguf) | Q2_K | 1.27GB |
| [llama3_KM.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [llama3_KM.IQ3_S.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [llama3_KM.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [llama3_KM.IQ3_M.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [llama3_KM.Q3_K.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q3_K.gguf) | Q3_K | 1.57GB |
| [llama3_KM.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [llama3_KM.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [llama3_KM.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [llama3_KM.Q4_0.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q4_0.gguf) | Q4_0 | 1.79GB |
| [llama3_KM.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [llama3_KM.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [llama3_KM.Q4_K.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q4_K.gguf) | Q4_K | 1.88GB |
| [llama3_KM.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [llama3_KM.Q4_1.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q4_1.gguf) | Q4_1 | 1.95GB |
| [llama3_KM.Q5_0.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q5_0.gguf) | Q5_0 | 2.11GB |
| [llama3_KM.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [llama3_KM.Q5_K.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q5_K.gguf) | Q5_K | 2.16GB |
| [llama3_KM.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [llama3_KM.Q5_1.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q5_1.gguf) | Q5_1 | 2.28GB |
| [llama3_KM.Q6_K.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q6_K.gguf) | Q6_K | 2.46GB |
| [llama3_KM.Q8_0.gguf](https://huggingface.co/RichardErkhov/choeyunbeom_-_llama3_KM-gguf/blob/main/llama3_KM.Q8_0.gguf) | Q8_0 | 3.19GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HPLT/hplt_bert_base_2_0_hin-Deva | HPLT | "2025-03-19T12:45:36Z" | 9 | 0 | null | [
"pytorch",
"BERT",
"HPLT",
"encoder",
"custom_code",
"hi",
"dataset:HPLT/HPLT2.0_cleaned",
"arxiv:2503.10267",
"license:apache-2.0",
"region:us"
] | null | "2025-02-22T22:49:53Z" | ---
language:
- hi
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/HPLT2.0_cleaned
---
# HPLT v2.0 BERT for Hindi
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a second release by the [HPLT project](https://hplt-project.org/).
It is a so called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
We present monolingual LTG-BERT models for more than 50 languages out of 191 total in the [HPLT v2.0 dataset](https://hplt-project.org/datasets/v2.0).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage (tested with `transformers==4.46.1` and `tokenizers==0.20.1`)
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_2_0_hin-Deva")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_hin-Deva", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist(), clean_up_tokenization_spaces=True))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of every 3125 training steps in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
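Since checkpoints land every 3125 steps, the branch names can be generated rather than typed out (assuming the ten checkpoints run from step 3125 to step 31250):

```python
# The stepXXX branch names follow from the 3125-step interval described
# above (assuming checkpoints 1..10 land at steps 3125, 6250, ..., 31250).
revisions = [f"step{3125 * i}" for i in range(1, 11)]
print(revisions[:3])  # ['step3125', 'step6250', 'step9375']
```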
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_2_0_hin-Deva", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_2_0_hin-Deva")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@misc{burchell2025expandedmassivemultilingualdataset,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
year={2025},
eprint={2503.10267},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2503.10267},
}
```
|
MinhViet/bartpho-linear2 | MinhViet | "2024-05-30T18:18:17Z" | 39 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-05-30T18:17:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wufeim/Qwen2.5-VL-7B-Instruct-SFT-OpenImages_3DSR_mar16_filtered1200_nothinking-2025-03-29-20-59-47 | wufeim | "2025-03-30T01:54:10Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:ccvl/OpenImages_3DSR_mar16_filtered1200_nothinking",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-30T01:00:20Z" | ---
base_model: Qwen/Qwen2.5-VL-7B-Instruct
datasets: ccvl/OpenImages_3DSR_mar16_filtered1200_nothinking
library_name: transformers
model_name: Qwen/Qwen2.5-VL-7B-Instruct
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen/Qwen2.5-VL-7B-Instruct
This model is a fine-tuned version of [Qwen/Qwen2.5-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct) on the [ccvl/OpenImages_3DSR_mar16_filtered1200_nothinking](https://huggingface.co/datasets/ccvl/OpenImages_3DSR_mar16_filtered1200_nothinking) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="wufeim/Qwen2.5-VL-7B-Instruct-SFT-OpenImages_3DSR_mar16_filtered1200_nothinking-2025-03-29-20-59-47", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/wma27/spatial-reasoning-r1/runs/gfrg85bs)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1
- Datasets: 2.16.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
manche/gpt2-safeguard-zs | manche | "2024-02-07T16:14:17Z" | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-07T16:13:18Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aldjia/q-FrozenLake-v1-4x4-noSlippery | aldjia | "2024-04-27T13:53:03Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2024-04-27T13:48:53Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="aldjia/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
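The object loaded above holds the learned Q-table; a minimal sketch of greedy action selection on such a table, assuming it is a NumPy array indexed as `[state, action]` (the values below are illustrative, not the trained weights):

```python
import numpy as np

# Hypothetical 4x4 FrozenLake Q-table: 16 states x 4 actions
# (left, down, right, up). A trained table would come from the
# loaded model instead of these illustrative values.
qtable = np.zeros((16, 4))
qtable[0, 1] = 1.0  # from the start state, "down" has the highest value
qtable[4, 1] = 0.9

def greedy_action(qtable, state):
    # Exploit only: pick the action with the highest Q-value for this state.
    return int(np.argmax(qtable[state]))

print(greedy_action(qtable, 0))  # -> 1
```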
|
JacksonBrune/b5dd3107-5f7d-403c-b405-baf150631a9b | JacksonBrune | "2025-02-15T00:20:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:fxmarty/tiny-dummy-qwen2",
"base_model:adapter:fxmarty/tiny-dummy-qwen2",
"license:mit",
"region:us"
] | null | "2025-02-15T00:15:37Z" | ---
library_name: peft
license: mit
base_model: fxmarty/tiny-dummy-qwen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: b5dd3107-5f7d-403c-b405-baf150631a9b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# b5dd3107-5f7d-403c-b405-baf150631a9b
This model is a fine-tuned version of [fxmarty/tiny-dummy-qwen2](https://huggingface.co/fxmarty/tiny-dummy-qwen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8691
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
huantian2415/vicuna-13b-chinese-4bit-ggml | huantian2415 | "2023-04-28T03:50:44Z" | 0 | 11 | null | [
"region:us"
] | null | "2023-04-27T02:05:33Z" | # Vicuna 13B V1.1 Chinese 4bit ggml format
This model was obtained from the following repos:
* uukuguy/vicuna-13b-v1.1
* ziqingyang/chinese-alpaca-lora-13b
Merged using scripts from: https://github.com/ymcui/Chinese-LLaMA-Alpaca
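The weights here use ggml's 4-bit quantization, which stores weights as small integers with a shared per-block scale; a simplified numpy sketch of symmetric block quantization (illustrative only; not the exact ggml `q4` data layout):

```python
import numpy as np

def quantize_block(w, bits=4):
    # Symmetric per-block quantization: scale so the largest magnitude
    # in the block maps to the largest representable integer.
    qmax = 2 ** (bits - 1) - 1          # 7 for 4-bit
    scale = np.abs(w).max() / qmax
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize_block(q, scale):
    # Recover approximate float weights from integers and the block scale.
    return q.astype(np.float32) * scale

w = np.array([0.7, -0.35, 0.1, 0.0], dtype=np.float32)
q, scale = quantize_block(w)
print(dequantize_block(q, scale))
```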
**License:**
Apache License 2.0
**Result:**

|
narpas/Deviant-EXPERIMENTAL-V2-70B-6.0bpw-h8-exl2 | narpas | "2025-03-16T22:28:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:TareksLab/Deviant-EXPERIMENTAL-V2-70B",
"base_model:quantized:TareksLab/Deviant-EXPERIMENTAL-V2-70B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | "2025-03-16T20:05:27Z" | ---
base_model:
- TareksLab/Deviant-EXPERIMENTAL-V2-70B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [allura-org/Bigger-Body-70b](https://huggingface.co/allura-org/Bigger-Body-70b) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-70B-Euryale-v2.1](https://huggingface.co/Sao10K/L3-70B-Euryale-v2.1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [ReadyArt/Forgotten-Safeword-70B-3.6](https://huggingface.co/ReadyArt/Forgotten-Safeword-70B-3.6)
* [TheDrummer/Fallen-Llama-3.3-R1-70B-v1](https://huggingface.co/TheDrummer/Fallen-Llama-3.3-R1-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- model: ReadyArt/Forgotten-Safeword-70B-3.6
- model: SicariusSicariiStuff/Negative_LLAMA_70B
- model: Sao10K/L3-70B-Euryale-v2.1
merge_method: sce
base_model: allura-org/Bigger-Body-70b
parameters:
select_topk: 0.75
int8_mask: true
chat_template: llama3
tokenizer:
source: base
dtype: bfloat16
```
|
LHRuig/pepe3 | LHRuig | "2025-02-02T20:02:18Z" | 8 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-02T20:01:52Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: pepe3
---
# pepe3
<Gallery />
## Model description
pepe3 lora
## Trigger words
You should use `pepe3` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/pepe3/tree/main) them in the Files & versions tab.
|
wu981526092/MK4 | wu981526092 | "2024-09-09T14:59:38Z" | 37 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-09T14:59:01Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT-Gen6fix-W-gemma-2-ItATv3-9B | zelk12 | "2025-02-02T10:10:41Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:merge:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:TheDrummer/Tiger-Gemma-9B-v3",
"base_model:merge:TheDrummer/Tiger-Gemma-9B-v3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-02T10:04:38Z" | ---
base_model:
- TheDrummer/Tiger-Gemma-9B-v3
- IlyaGusev/gemma-2-9b-it-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [TheDrummer/Tiger-Gemma-9B-v3](https://huggingface.co/TheDrummer/Tiger-Gemma-9B-v3)
* [IlyaGusev/gemma-2-9b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: IlyaGusev/gemma-2-9b-it-abliterated
- model: TheDrummer/Tiger-Gemma-9B-v3
merge_method: slerp
base_model: IlyaGusev/gemma-2-9b-it-abliterated
dtype: bfloat16
parameters:
t: 0.25
```
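SLERP interpolates model weights along the arc between two parameter vectors rather than along a straight line; a simplified numpy sketch with `t = 0.25` as in the config above (an illustration of the idea, not mergekit's actual implementation):

```python
import numpy as np

def slerp(a, b, t):
    # Spherical linear interpolation between flattened weight vectors.
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # nearly parallel vectors: fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(slerp(x, y, 0.25))  # stays on the unit arc, a quarter of the way to y
```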
|
Nerva1228/daiyu01 | Nerva1228 | "2024-12-25T09:42:52Z" | 8 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-23T03:01:24Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: daiyu
---
# Daiyu01
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `daiyu` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nerva1228/daiyu01', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
RenauxLouis/merged-monet-mitchell-10000steps-688 | RenauxLouis | "2023-05-22T10:45:25Z" | 3 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-05-20T19:25:57Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - RenauxLouis/merged-monet-mitchell-10000steps-688
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the merged-monet-mitchell-dataset dataset. You can find some example images below.




|
ardaspear/7c03aea4-2a29-4740-aa9e-d4f17adf6239 | ardaspear | "2025-01-11T11:01:45Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"opt",
"axolotl",
"generated_from_trainer",
"base_model:facebook/opt-125m",
"base_model:adapter:facebook/opt-125m",
"license:other",
"region:us"
] | null | "2025-01-11T11:00:25Z" | ---
library_name: peft
license: other
base_model: facebook/opt-125m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7c03aea4-2a29-4740-aa9e-d4f17adf6239
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: facebook/opt-125m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8d80dfcbb444ae04_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8d80dfcbb444ae04_train_data.json
type:
field_instruction: text
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: ardaspear/7c03aea4-2a29-4740-aa9e-d4f17adf6239
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: true
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 8
mlflow_experiment_name: /tmp/8d80dfcbb444ae04_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: false
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: leixa-personal
wandb_mode: online
wandb_name: 802638da-fd54-4d25-ad5b-57beacbf31a3
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 802638da-fd54-4d25-ad5b-57beacbf31a3
warmup_steps: 10
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 7c03aea4-2a29-4740-aa9e-d4f17adf6239
This model is a fine-tuned version of [facebook/opt-125m](https://huggingface.co/facebook/opt-125m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
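The effective batch size listed above follows directly from the micro-batch size and gradient accumulation; a quick check of the arithmetic (assuming a single device):

```python
# Effective batch size under gradient accumulation: weights are updated
# once per `gradient_accumulation_steps` micro-batches.
micro_batch_size = 8
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single-GPU run

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # -> 32
```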
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0098 | 1 | 2.3276 |
| 9.8132 | 0.0490 | 5 | 2.2486 |
| 9.2482 | 0.0980 | 10 | 2.0860 |
| 8.1697 | 0.1471 | 15 | 2.0146 |
| 8.6968 | 0.1961 | 20 | 1.9683 |
| 8.0741 | 0.2451 | 25 | 1.9344 |
| 7.9488 | 0.2941 | 30 | 1.9185 |
| 8.7765 | 0.3431 | 35 | 1.8965 |
| 8.1578 | 0.3922 | 40 | 1.8888 |
| 7.9261 | 0.4412 | 45 | 1.8863 |
| 8.2143 | 0.4902 | 50 | 1.8856 |
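With `lr_scheduler: cosine`, 10 warmup steps, and 50 training steps as configured above, the learning rate ramps up linearly and then decays along a cosine curve; a sketch of that schedule (a simplified stand-in for the library's scheduler):

```python
import math

def lr_at(step, base_lr=1e-4, warmup_steps=10, total_steps=50):
    # Linear warmup followed by cosine decay to zero.
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(10))  # peak learning rate right after warmup
print(lr_at(50))  # decayed to 0 at the final step
```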
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
FounderOfHuggingface/gpt2_lora_r8_dbpedia_14_t300_e5_non_member_shadow4 | FounderOfHuggingface | "2023-12-04T15:27:21Z" | 2 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-04T15:27:19Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
### Framework versions
- PEFT 0.6.2
|
Clawoo/ppo-LunarLander-v2u1 | Clawoo | "2023-02-15T18:33:04Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-15T18:32:38Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.09 +/- 20.55
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
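The `mean_reward` of 265.09 +/- 20.55 reported above is the mean and standard deviation of total reward over evaluation episodes; a minimal sketch of how such a score is computed (the episode rewards below are made up for illustration):

```python
import numpy as np

def summarize_rewards(episode_rewards):
    # Mean and standard deviation over evaluation episodes,
    # matching the "mean_reward +/- std" format used above.
    rewards = np.asarray(episode_rewards, dtype=float)
    return rewards.mean(), rewards.std()

mean, std = summarize_rewards([250.0, 270.0, 275.0])
print(f"{mean:.2f} +/- {std:.2f}")
```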
|
malper/unikud | malper | "2022-04-25T02:11:25Z" | 17,783 | 4 | transformers | [
"transformers",
"pytorch",
"canine",
"he",
"endpoints_compatible",
"region:us"
] | null | "2022-04-18T15:56:16Z" | ---
language:
- he
---
Please see [this model's DagsHub repository](https://dagshub.com/morrisalp/unikud) for information on usage. |
abenius/0a8f8bd8-2a70-4069-a8de-03f19149870d | abenius | "2025-02-05T15:35:35Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-3B",
"base_model:adapter:unsloth/Qwen2.5-3B",
"license:other",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-05T14:56:41Z" | ---
library_name: peft
license: other
base_model: unsloth/Qwen2.5-3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0a8f8bd8-2a70-4069-a8de-03f19149870d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8269aa2f21864766_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8269aa2f21864766_train_data.json
type:
field_input: prompt
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/0a8f8bd8-2a70-4069-a8de-03f19149870d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/8269aa2f21864766_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: a3da30a6-f598-4d7b-afdf-be7e19d12470
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: a3da30a6-f598-4d7b-afdf-be7e19d12470
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 0a8f8bd8-2a70-4069-a8de-03f19149870d
This model is a fine-tuned version of [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 500
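The cosine-with-warmup schedule above (5 warmup steps, peak learning rate 1e-4, 500 training steps) can be sketched in plain Python; this is an illustrative reimplementation of the shape of the schedule, not the exact `transformers` scheduler:

```python
import math

def lr_at_step(step, peak_lr=1e-4, warmup_steps=5, total_steps=500):
    """Cosine decay with linear warmup, matching the hyperparameters above."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps              # linear warmup
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay

print(lr_at_step(0))    # 0.0 at the first step
print(lr_at_step(5))    # peak learning rate (1e-4)
print(lr_at_step(500))  # ~0 at the end of training
```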
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3253 | 0.4901 | 500 | 0.3043 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bharati2324/Llama-1B-Code-LoRA-r8-merged | bharati2324 | "2024-11-13T02:24:47Z" | 55 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-13T02:23:27Z" | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CMU-AIR2/math-phi-1-5-FULL-Arithmetic-4k-MWP-2k | CMU-AIR2 | "2024-05-24T23:56:51Z" | 150 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-24T23:53:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jiayihao03/gemma2b_code_python | jiayihao03 | "2024-03-05T01:38:49Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-03-05T01:38:34Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** jiayihao03
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
jasonkrone/olmo_1b_toks_75b | jasonkrone | "2024-08-19T01:48:12Z" | 322 | 0 | transformers | [
"transformers",
"safetensors",
"hf_olmo",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-08-19T01:46:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sail-rvc/aresgun | sail-rvc | "2023-07-14T07:35:10Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:34:45Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# aresgun
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:35:10
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
hedronstone/whisper-large-v2-sw | hedronstone | "2022-12-20T13:53:16Z" | 75 | 1 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-09T14:47:18Z" | ---
language:
- sw
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: whisper-large-v2-sw
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sw
split: test
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 30.7
---
## Model
* Name: Whisper Large-v2 Swahili
* Description: Whisper weights for the speech-to-text task, fine-tuned and evaluated on normalized data.
* Dataset:
- Train and validation splits for Swahili subsets of [Common Voice 11.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0).
- Train, validation and test splits for Swahili subsets of [Google Fleurs](https://huggingface.co/datasets/google/fleurs/).
* Performance: **30.7 WER**
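WER here is the word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words; a minimal illustrative implementation is below (the reported figure was presumably computed with standard evaluation tooling):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, single-row dynamic programming
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[len(hyp)] / len(ref)

print(wer("habari za asubuhi", "habari ya asubuhi"))  # one substitution over three reference words
```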
## Weights
* Date of release: 12.09.2022
* License: MIT
## Usage
To use these weights in HuggingFace's `transformers` library, you can do the following:
```python
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("hedronstone/whisper-large-v2-sw")
```
|
ValiantLabs/Llama3.1-8B-Esper2 | ValiantLabs | "2025-03-12T00:31:27Z" | 44 | 2 | null | [
"safetensors",
"llama",
"esper",
"esper-2",
"valiant",
"valiant-labs",
"llama-3.1",
"llama-3.1-instruct",
"llama-3.1-instruct-8b",
"llama-3",
"llama-3-instruct",
"llama-3-instruct-8b",
"8b",
"code",
"code-instruct",
"python",
"dev-ops",
"terraform",
"azure",
"aws",
"gcp",
"architect",
"engineer",
"developer",
"conversational",
"chat",
"instruct",
"text-generation",
"en",
"dataset:sequelbox/Titanium",
"dataset:sequelbox/Tachibana",
"dataset:sequelbox/Supernova",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"model-index",
"region:us"
] | text-generation | "2024-10-02T14:36:46Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- esper
- esper-2
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- code
- code-instruct
- python
- dev-ops
- terraform
- azure
- aws
- gcp
- architect
- engineer
- developer
- conversational
- chat
- instruct
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- sequelbox/Titanium
- sequelbox/Tachibana
- sequelbox/Supernova
model_type: llama
model-index:
- name: ValiantLabs/Llama3.1-8B-Esper2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-Shot)
type: Winogrande
args:
num_few_shot: 5
metrics:
- type: acc
value: 75.85
name: acc
license: llama3.1
---
**[ESPER 3 COMING SOON! Click here to support our open-source dataset and model releases!](https://huggingface.co/spaces/sequelbox/SupportOpenSource)**

Esper 2 is a DevOps and cloud architecture code specialist built on Llama 3.1 8b.
- Expertise-driven: an AI assistant focused on AWS, Azure, GCP, Terraform, Dockerfiles, pipelines, shell scripts and more!
- Real world problem solving and high quality code instruct performance within the Llama 3.1 Instruct chat format
- Finetuned on synthetic [DevOps-instruct](https://huggingface.co/datasets/sequelbox/Titanium) and [code-instruct](https://huggingface.co/datasets/sequelbox/Tachibana) data generated with Llama 3.1 405b.
- Overall chat performance supplemented with [generalist chat data.](https://huggingface.co/datasets/sequelbox/Supernova)
Try our code-instruct AI assistant [Enigma!](https://huggingface.co/ValiantLabs/Llama3.1-8B-Enigma)
## Version
This is the **2024-10-02** release of Esper 2 for Llama 3.1 8b.
Esper 2 is now available for [Llama 3.2 3b!](https://huggingface.co/ValiantLabs/Llama3.2-3B-Esper2)
Esper 2 will be coming to more model sizes soon :)
## Prompting Guide
Esper 2 uses the [Llama 3.1 Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) prompt format. The example script below can be used as a starting point for general chat:
```python
import transformers
import torch
model_id = "ValiantLabs/Llama3.1-8B-Esper2"
pipeline = transformers.pipeline(
"text-generation",
model=model_id,
model_kwargs={"torch_dtype": torch.bfloat16},
device_map="auto",
)
messages = [
{"role": "system", "content": "You are an AI assistant."},
{"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"}
]
outputs = pipeline(
messages,
max_new_tokens=2048,
)
print(outputs[0]["generated_text"][-1])
```
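If you assemble prompts by hand rather than through `pipeline` or `tokenizer.apply_chat_template`, the Llama 3.1 Instruct layout can be sketched as below. The special-token names are the standard Llama 3.1 ones; prefer `apply_chat_template` in practice:

```python
def build_llama31_prompt(messages):
    """Hand-rolled sketch of the Llama 3.1 Instruct chat layout.
    Prefer tokenizer.apply_chat_template(messages, ...) in real code."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")  # generation prompt
    return "".join(parts)

prompt = build_llama31_prompt([
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hi, how do I optimize the size of a Docker image?"},
])
print(prompt[:60])
```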
## The Model
Esper 2 is built on top of Llama 3.1 8b Instruct, improving performance through high quality DevOps, code, and chat data in Llama 3.1 Instruct prompt style.
Our current version of Esper 2 is trained on DevOps data from [sequelbox/Titanium](https://huggingface.co/datasets/sequelbox/Titanium), supplemented by code-instruct data from [sequelbox/Tachibana](https://huggingface.co/datasets/sequelbox/Tachibana) and general chat data from [sequelbox/Supernova.](https://huggingface.co/datasets/sequelbox/Supernova)

Esper 2 is created by [Valiant Labs.](http://valiantlabs.ca/)
[Check out our HuggingFace page for Shining Valiant 2, Enigma, and our other Build Tools models for creators!](https://huggingface.co/ValiantLabs)
We care about open source.
For everyone to use.
We encourage others to finetune further from our models. |
protectai/deberta-v3-large-zeroshot-v1-onnx | protectai | "2024-04-11T12:11:53Z" | 9 | 1 | transformers | [
"transformers",
"onnx",
"deberta-v2",
"text-classification",
"NLI",
"deberta-v3",
"zero-shot-classification",
"en",
"dataset:mnli",
"dataset:facebook/anli",
"dataset:fever",
"dataset:wanli",
"dataset:ling",
"dataset:amazonpolarity",
"dataset:imdb",
"dataset:appreviews",
"base_model:MoritzLaurer/deberta-v3-large-zeroshot-v1",
"base_model:quantized:MoritzLaurer/deberta-v3-large-zeroshot-v1",
"license:mit",
"autotrain_compatible",
"region:us"
] | zero-shot-classification | "2023-11-12T21:29:50Z" | ---
language:
- en
license: mit
tags:
- NLI
- deberta-v3
datasets:
- mnli
- facebook/anli
- fever
- wanli
- ling
- amazonpolarity
- imdb
- appreviews
inference: false
pipeline_tag: zero-shot-classification
base_model: MoritzLaurer/deberta-v3-large-zeroshot-v1
---
# ONNX version of MoritzLaurer/deberta-v3-large-zeroshot-v1
**This model is a conversion of [MoritzLaurer/deberta-v3-large-zeroshot-v1](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1) to ONNX** format using the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library.
`MoritzLaurer/deberta-v3-large-zeroshot-v1` is designed for zero-shot classification: it determines whether a hypothesis is `true` or `not_true` given a text, a format based on Natural Language Inference (NLI).
## Usage
Loading the model requires the [🤗 Optimum](https://huggingface.co/docs/optimum/index) library installed.
```python
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
tokenizer = AutoTokenizer.from_pretrained("laiyer/deberta-v3-large-zeroshot-v1-onnx")
tokenizer.model_input_names = ["input_ids", "attention_mask"]
model = ORTModelForSequenceClassification.from_pretrained("laiyer/deberta-v3-large-zeroshot-v1-onnx")
classifier = pipeline(
task="zero-shot-classification",
model=model,
tokenizer=tokenizer,
)
classifier_output = classifier("Last week I upgraded my iOS version and ever since then my phone has been overheating whenever I use your app.", ["mobile", "website", "billing", "account access"])
print(classifier_output)
```
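Under the hood, NLI-based zero-shot classification scores one (text, hypothesis) pair per candidate label and then normalizes the entailment probabilities across labels. A model-free sketch of that post-processing step (the logits below are placeholders, not real model outputs):

```python
import math

def zero_shot_scores(entail_logits, not_entail_logits):
    """Per label: softmax over (not_entailment, entailment), keep P(entailment);
    then normalize across labels, as the zero-shot pipeline does."""
    entail_probs = []
    for e, n in zip(entail_logits, not_entail_logits):
        z = math.exp(e) + math.exp(n)
        entail_probs.append(math.exp(e) / z)
    total = sum(entail_probs)
    return [p / total for p in entail_probs]

# Placeholder logits for ["mobile", "website", "billing", "account access"]
scores = zero_shot_scores([3.1, -1.0, -2.2, 0.4], [-2.0, 1.5, 2.0, 0.3])
print(scores)  # most probability mass on "mobile"
```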
### LLM Guard
[Ban Topics scanner](https://llm-guard.com/input_scanners/ban_topics/)
## Community
Join our Slack to give us feedback, connect with the maintainers and fellow users, ask questions,
or engage in discussions about LLM security!
<a href="https://join.slack.com/t/laiyerai/shared_invite/zt-28jv3ci39-sVxXrLs3rQdaN3mIl9IT~w"><img src="https://github.com/laiyer-ai/llm-guard/blob/main/docs/assets/join-our-slack-community.png?raw=true" width="200"></a>
|
zhanjun/lora-trained-xl-notion_trans | zhanjun | "2024-05-11T05:00:56Z" | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2024-05-11T03:27:00Z" | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a notion style picture of a person
widget:
- text: a notion style cartoon man's face with a black and white outline
output:
url: image_0.png
- text: a notion style cartoon man's face with a black and white outline
output:
url: image_1.png
- text: a notion style cartoon man's face with a black and white outline
output:
url: image_2.png
- text: a notion style cartoon man's face with a black and white outline
output:
url: image_3.png
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - zhanjun/lora-trained-xl-notion_trans
<Gallery />
## Model description
These are zhanjun/lora-trained-xl-notion_trans LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a notion style picture of a person to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](zhanjun/lora-trained-xl-notion_trans/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Illustrative sketch only (the card's author left this as a TODO); assumes a
# recent diffusers release and a CUDA GPU.
from diffusers import AutoPipelineForText2Image
import torch

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("zhanjun/lora-trained-xl-notion_trans")
image = pipe("a notion style picture of a person").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nkpz/OpenThinker-7B-Uncensored-DeLMAT | nkpz | "2025-02-20T23:26:31Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"base_model:open-thoughts/OpenThinker-7B",
"base_model:finetune:open-thoughts/OpenThinker-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-20T22:07:40Z" | ---
license: apache-2.0
base_model:
- open-thoughts/OpenThinker-7B
---
Decensored using a custom training script guided by activations, similar to ablation/"abliteration" scripts but not exactly the same approach.
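For context, the ablation / "abliteration" scripts mentioned above typically remove a learned "refusal direction" by projecting it out of the hidden activations. A toy sketch of that projection (purely illustrative; not DeLMAT's actual training-based method):

```python
def ablate_direction(h, r):
    """h' = h - (h . r_hat) r_hat : remove the component of h along direction r."""
    norm = sum(x * x for x in r) ** 0.5
    r_hat = [x / norm for x in r]
    dot = sum(a * b for a, b in zip(h, r_hat))
    return [a - dot * b for a, b in zip(h, r_hat)]

h = [1.0, 2.0, 3.0]
r = [0.0, 1.0, 0.0]            # hypothetical "refusal direction"
print(ablate_direction(h, r))  # -> [1.0, 0.0, 3.0]
```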
The training script is released under the MIT license: https://github.com/nkpz/DeLMAT |
baesad/Llama3.2-BLChat-3B | baesad | "2025-02-02T06:10:11Z" | 17 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-31T15:17:16Z" | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** baesad
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
mradermacher/CosmoQwen2.4-i1-GGUF | mradermacher | "2025-03-25T13:25:55Z" | 15 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:jetuned/CosmoQwen2.4",
"base_model:quantized:jetuned/CosmoQwen2.4",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-03-25T02:04:14Z" | ---
base_model: jetuned/CosmoQwen2.4
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jetuned/CosmoQwen2.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/CosmoQwen2.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
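The linked README covers this in detail; as a quick hedged sketch, multi-part GGUF downloads are reassembled by byte-wise concatenation of the parts in order (the file names below are illustrative stand-ins, not actual files in this repo — real parts are named like `<quant>.gguf.part1of2`):

```python
# Demo of joining multi-part GGUF files; dummy parts stand in for real downloads.
from pathlib import Path

# Stand-ins for downloaded part files (illustrative names/contents):
Path("model.gguf.part1of2").write_bytes(b"GGUF-part-1")
Path("model.gguf.part2of2").write_bytes(b"-part-2")

# Concatenate the parts IN ORDER into the single usable file:
with open("model.gguf", "wb") as out:
    for part in ("model.gguf.part1of2", "model.gguf.part2of2"):
        out.write(Path(part).read_bytes())

print(Path("model.gguf").stat().st_size)  # -> 18
```

After joining, point your GGUF loader (e.g. llama.cpp) at the combined file; on Unix the same thing is usually done with `cat part1 part2 > model.gguf`.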
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/CosmoQwen2.4-i1-GGUF/resolve/main/CosmoQwen2.4.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
SamuelM0422/detr-resnet-50-hardhat-finetuned | SamuelM0422 | "2025-03-04T14:21:44Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"generated_from_trainer",
"dataset:anindya64/hardhat",
"base_model:facebook/detr-resnet-50-dc5",
"base_model:finetune:facebook/detr-resnet-50-dc5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | object-detection | "2025-03-04T12:50:14Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/detr-resnet-50-dc5
tags:
- generated_from_trainer
datasets:
- anindya64/hardhat
model-index:
- name: DETR Resnet 50 - Helmet Detection
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DETR Resnet 50 - Helmet Detection
This model is a fine-tuned version of [facebook/detr-resnet-50-dc5](https://huggingface.co/facebook/detr-resnet-50-dc5) on the Hard Hat dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08 and no additional optimizer arguments
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.49.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.2
- Tokenizers 0.21.0
|
ontocord/ontocord_wide_7b-stacked-stage1-instruct | ontocord | "2025-02-27T19:56:40Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-27T19:30:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Mermaid-Llama-22B-RAG-GGUF | mradermacher | "2024-05-14T19:07:46Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:TroyDoesAI/Mermaid-Llama-22B-RAG",
"base_model:quantized:TroyDoesAI/Mermaid-Llama-22B-RAG",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-14T15:30:34Z" | ---
base_model: TroyDoesAI/Mermaid-Llama-22B-RAG
language:
- en
library_name: transformers
license: cc-by-4.0
no_imatrix: nan
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
static quants of https://huggingface.co/TroyDoesAI/Mermaid-Llama-22B-RAG
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q2_K.gguf) | Q2_K | 8.2 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.IQ3_XS.gguf) | IQ3_XS | 9.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.IQ3_S.gguf) | IQ3_S | 9.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q3_K_S.gguf) | Q3_K_S | 9.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.IQ3_M.gguf) | IQ3_M | 10.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q3_K_M.gguf) | Q3_K_M | 10.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q3_K_L.gguf) | Q3_K_L | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.IQ4_XS.gguf) | IQ4_XS | 11.9 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q4_K_S.gguf) | Q4_K_S | 12.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q4_K_M.gguf) | Q4_K_M | 13.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q5_K_S.gguf) | Q5_K_S | 15.1 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q5_K_M.gguf) | Q5_K_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q6_K.gguf) | Q6_K | 18.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Mermaid-Llama-22B-RAG-GGUF/resolve/main/Mermaid-Llama-22B-RAG.Q8_0.gguf) | Q8_0 | 23.3 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/L3.1-Magnusv2-10B-i1-GGUF | mradermacher | "2024-12-26T05:27:03Z" | 72 | 1 | transformers | [
"transformers",
"gguf",
"en",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-12-25T19:25:28Z" | ---
base_model: kromcomp/L3.1-Magnusv2-10B
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/kromcomp/L3.1-Magnusv2-10B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ1_S.gguf) | i1-IQ1_S | 2.6 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ1_M.gguf) | i1-IQ1_M | 2.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.8 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.7 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ3_S.gguf) | i1-IQ3_S | 4.7 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ3_M.gguf) | i1-IQ3_M | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.2 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q4_0.gguf) | i1-Q4_0 | 6.0 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.0 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.0 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q4_1.gguf) | i1-Q4_1 | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.2 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.4 | |
| [GGUF](https://huggingface.co/mradermacher/L3.1-Magnusv2-10B-i1-GGUF/resolve/main/L3.1-Magnusv2-10B.i1-Q6_K.gguf) | i1-Q6_K | 8.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
FounderOfHuggingface/gpt2_gen_lora_r16_dbpedia_14_t300_e5_non_member_shadow18 | FounderOfHuggingface | "2023-12-20T12:35:46Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:openai-community/gpt2",
"base_model:adapter:openai-community/gpt2",
"region:us"
] | null | "2023-12-20T12:35:43Z" | ---
library_name: peft
base_model: gpt2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.1 |
rohitp1/dgx1_whisper_small_finetune_teacher_babble_noise_mozilla_40_epochs_batch_8 | rohitp1 | "2023-03-14T01:11:44Z" | 77 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-03-06T14:45:57Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: dgx1_whisper_small_finetune_teacher_babble_noise_mozilla_40_epochs_batch_8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dgx1_whisper_small_finetune_teacher_babble_noise_mozilla_40_epochs_batch_8
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7176
- Wer: 29.1345
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP
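For readers unfamiliar with gradient accumulation: the `total_train_batch_size` reported above is not an independent setting; it is the per-device batch size multiplied by the accumulation steps, as this small check illustrates:

```python
# Effective batch size under gradient accumulation (values from this card).
train_batch_size = 8              # examples per forward/backward pass
gradient_accumulation_steps = 256 # passes accumulated before each optimizer step
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # -> 2048
```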
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2059 | 14.7 | 500 | 0.7073 | 31.1921 |
| 0.0023 | 29.41 | 1000 | 0.7176 | 29.1345 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2
|
evannaderi/distilbert-base-uncased-finetuned-emotion | evannaderi | "2024-02-27T04:52:41Z" | 118 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-27T01:48:52Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.933
- name: F1
type: f1
value: 0.932933898333218
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1561
- Accuracy: 0.933
- F1: 0.9329
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.1706 | 0.9265 | 0.9265 |
| No log | 2.0 | 500 | 0.1561 | 0.933 | 0.9329 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
ivan100096/Mixtral_Alpace_v2 | ivan100096 | "2024-03-01T12:09:09Z" | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:mistralai/Mixtral-8x7B-v0.1",
"base_model:adapter:mistralai/Mixtral-8x7B-v0.1",
"license:apache-2.0",
"region:us"
] | null | "2024-03-01T12:07:28Z" | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mixtral-8x7B-v0.1
model-index:
- name: Mixtral_Alpace_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Mixtral_Alpace_v2
This model is a fine-tuned version of [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5741
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 0.03
- training_steps: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.03 | 50 | 1.5982 |
| 1.5364 | 0.06 | 100 | 1.5741 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
lesso03/85c35395-66cb-44c2-961d-a509c5db6503 | lesso03 | "2025-02-15T00:13:13Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B",
"base_model:adapter:Qwen/Qwen2.5-7B",
"license:apache-2.0",
"region:us"
] | null | "2025-02-14T21:56:28Z" | ---
library_name: peft
license: apache-2.0
base_model: Qwen/Qwen2.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 85c35395-66cb-44c2-961d-a509c5db6503
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 85c35395-66cb-44c2-961d-a509c5db6503
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000203
- train_batch_size: 4
- eval_batch_size: 4
- seed: 30
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0000 | 1 | 2.3956 |
| 2.0846 | 0.0008 | 50 | 1.9741 |
| 1.7834 | 0.0016 | 100 | 2.1332 |
| 2.0425 | 0.0024 | 150 | 1.9915 |
| 1.9821 | 0.0032 | 200 | 2.0586 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Malaika/ppo-LunarLander-v2-2 | Malaika | "2023-06-18T17:14:56Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-06-18T17:14:32Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.85 +/- 12.12
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename below is assumed; adjust it to the actual file in this repo.
checkpoint = load_from_hub("Malaika/ppo-LunarLander-v2-2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Erpix3lt/WADF_Dreambooth_SuitZebraPrint | Erpix3lt | "2023-02-03T14:59:01Z" | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-02-03T14:48:04Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### suit_zebra_print-dreambooth Dreambooth model trained by Erpix3lt with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
facebook/dpr-reader-single-nq-base | facebook | "2022-12-21T15:19:45Z" | 15,579 | 2 | transformers | [
"transformers",
"pytorch",
"tf",
"dpr",
"en",
"dataset:nq_open",
"arxiv:2004.04906",
"arxiv:1702.08734",
"arxiv:1910.09700",
"license:cc-by-nc-4.0",
"region:us"
] | null | "2022-03-02T23:29:05Z" | ---
language: en
license: cc-by-nc-4.0
tags:
- dpr
datasets:
- nq_open
inference: false
---
`dpr-reader-single-nq-base`
# Table of Contents
- [Model Details](#model-details)
- [How To Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation-results)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications](#technical-specifications)
- [Citation Information](#citation-information)
- [Model Card Authors](#model-card-authors)
## Model Details
**Model Description:** [Dense Passage Retrieval (DPR)](https://github.com/facebookresearch/DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. `dpr-reader-single-nq-base` is the reader model trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)).
- **Developed by:** See [GitHub repo](https://github.com/facebookresearch/DPR) for model developers
- **Model Type:** QA Reader Model
- **Language(s):** English
- **License:** [CC-BY-NC-4.0](https://github.com/facebookresearch/DPR/blob/main/LICENSE), also see [Code of Conduct](https://github.com/facebookresearch/DPR/blob/main/CODE_OF_CONDUCT.md)
- **Related Models:**
- [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base)
- [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base)
- [`dpr-ctx_encoder-multiset-base`](https://huggingface.co/facebook/dpr-ctx_encoder-multiset-base)
- [`dpr-question_encoder-multiset-base`](https://huggingface.co/facebook/dpr-question_encoder-multiset-base)
- [`dpr-reader-multiset-base`](https://huggingface.co/facebook/dpr-reader-multiset-base)
- **Resources for more information:**
- [Research Paper](https://arxiv.org/abs/2004.04906)
- [GitHub Repo](https://github.com/facebookresearch/DPR)
- [Hugging Face DPR docs](https://huggingface.co/docs/transformers/main/en/model_doc/dpr)
- [BERT Base Uncased Model Card](https://huggingface.co/bert-base-uncased)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors="pt",
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
```
## Uses
#### Direct Use
`dpr-reader-single-nq-base`, [`dpr-ctx_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-ctx_encoder-single-nq-base), and [`dpr-question_encoder-single-nq-base`](https://huggingface.co/facebook/dpr-question_encoder-single-nq-base) can be used for the task of open-domain question answering.
#### Misuse and Out-of-scope Use
The model should not be used to intentionally create hostile or alienating environments for people. In addition, the set of DPR models was not trained to be factual or true representations of people or events, and therefore using the models to generate such content is out-of-scope for the abilities of this model.
## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section may contain content that is disturbing, offensive, and can propagate historical and current stereotypes.**
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al., 2021](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al., 2021](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model can include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Training
#### Training Data
This model was trained using the [Natural Questions (NQ) dataset](https://huggingface.co/datasets/nq_open) ([Lee et al., 2019](https://aclanthology.org/P19-1612/); [Kwiatkowski et al., 2019](https://aclanthology.org/Q19-1026/)). The model authors write that:
> [The dataset] was designed for end-to-end question answering. The questions were mined from real Google search queries and the answers were spans in Wikipedia articles identified by annotators.
#### Training Procedure
The training procedure is described in the [associated paper](https://arxiv.org/pdf/2004.04906.pdf):
> Given a collection of M text passages, the goal of our dense passage retriever (DPR) is to index all the passages in a low-dimensional and continuous space, such that it can retrieve efficiently the top k passages relevant to the input question for the reader at run-time.
> Our dense passage retriever (DPR) uses a dense encoder EP(·) which maps any text passage to a d-dimensional real-valued vector and builds an index for all the M passages that we will use for retrieval. At run-time, DPR applies a different encoder EQ(·) that maps the input question to a d-dimensional vector, and retrieves k passages of which vectors are the closest to the question vector.
The authors report that for encoders, they used two independent BERT ([Devlin et al., 2019](https://aclanthology.org/N19-1423/)) networks (base, un-cased) and use FAISS ([Johnson et al., 2017](https://arxiv.org/abs/1702.08734)) during inference time to encode and index passages. See the paper for further details on training, including encoders, inference, positive and negative passages, and in-batch negatives.
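The retrieval step described above can be sketched in a few lines (a toy illustration only, not the DPR implementation itself, which uses d=768 BERT encoders and a FAISS index; the vectors and function below are illustrative stand-ins):

```python
import numpy as np

def top_k_passages(question_vec, passage_matrix, k=2):
    """Rank passages by inner-product similarity to the question vector."""
    scores = passage_matrix @ question_vec    # one dot product per passage
    return np.argsort(-scores)[:k].tolist()   # indices of the top-k passages

passages = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])  # toy "index" of 3 passages
question = np.array([1.0, 0.1])
print(top_k_passages(question, passages))  # [0, 1]
```

In the real system the passage matrix is precomputed offline and searched with FAISS rather than a dense matrix multiply.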
## Evaluation
The following evaluation information is extracted from the [associated paper](https://arxiv.org/pdf/2004.04906.pdf).
#### Testing Data, Factors and Metrics
The model developers report the performance of the model on five QA datasets, using the top-k accuracy (k ∈ {20, 100}). The datasets were [NQ](https://huggingface.co/datasets/nq_open), [TriviaQA](https://huggingface.co/datasets/trivia_qa), [WebQuestions (WQ)](https://huggingface.co/datasets/web_questions), [CuratedTREC (TREC)](https://huggingface.co/datasets/trec), and [SQuAD v1.1](https://huggingface.co/datasets/squad).
#### Results
| | Top 20 | | | | | Top 100| | | | |
|:----:|:------:|:---------:|:--:|:----:|:-----:|:------:|:---------:|:--:|:----:|:-----:|
| | NQ | TriviaQA | WQ | TREC | SQuAD | NQ | TriviaQA | WQ | TREC | SQuAD |
| | 78.4 | 79.4 |73.2| 79.8 | 63.2 | 85.4 | 85.0 |81.4| 89.1 | 77.2 |
## Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). We present the hardware type based on the [associated paper](https://arxiv.org/abs/2004.04906).
- **Hardware Type:** 8 32GB GPUs
- **Hours used:** Unknown
- **Cloud Provider:** Unknown
- **Compute Region:** Unknown
- **Carbon Emitted:** Unknown
## Technical Specifications
See the [associated paper](https://arxiv.org/abs/2004.04906) for details on the modeling architecture, objective, compute infrastructure, and training details.
## Citation Information
```bibtex
@inproceedings{karpukhin-etal-2020-dense,
title = "Dense Passage Retrieval for Open-Domain Question Answering",
author = "Karpukhin, Vladimir and Oguz, Barlas and Min, Sewon and Lewis, Patrick and Wu, Ledell and Edunov, Sergey and Chen, Danqi and Yih, Wen-tau",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = nov,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-main.550",
doi = "10.18653/v1/2020.emnlp-main.550",
pages = "6769--6781",
}
```
## Model Card Authors
This model card was written by the team at Hugging Face. |
mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF | mradermacher | "2024-12-20T13:04:25Z" | 14 | 1 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"FelixChao/WestSeverus-7B-DPO-v2",
"CultriX/Wernicke-7B-v9",
"mlabonne/NeuralBeagle14-7B",
"en",
"base_model:jsfs11/RandomMergeNoNorm-7B-DARETIES",
"base_model:quantized:jsfs11/RandomMergeNoNorm-7B-DARETIES",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix"
] | null | "2024-12-20T11:41:51Z" | ---
base_model: jsfs11/RandomMergeNoNorm-7B-DARETIES
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- FelixChao/WestSeverus-7B-DPO-v2
- CultriX/Wernicke-7B-v9
- mlabonne/NeuralBeagle14-7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jsfs11/RandomMergeNoNorm-7B-DARETIES
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
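Multi-part GGUF files are plain byte-level splits, so rejoining them is simple concatenation in part order (shown here with tiny stand-in files whose names are illustrative; on the command line, `cat part1 part2 > model.gguf` does the same):

```python
# Toy demo of rejoining a split GGUF: parts are raw byte chunks,
# concatenated in order. File names are stand-ins for real part files.
open("model.gguf.part1of2", "wb").write(b"part-one-")
open("model.gguf.part2of2", "wb").write(b"part-two")
with open("model.gguf", "wb") as out:
    for part in ("model.gguf.part1of2", "model.gguf.part2of2"):
        out.write(open(part, "rb").read())
print(open("model.gguf", "rb").read())  # b'part-one-part-two'
```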
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ1_S.gguf) | i1-IQ1_S | 1.7 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ2_S.gguf) | i1-IQ2_S | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.3 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ3_S.gguf) | i1-IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q4_0.gguf) | i1-Q4_0 | 4.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/RandomMergeNoNorm-7B-DARETIES-i1-GGUF/resolve/main/RandomMergeNoNorm-7B-DARETIES.i1-Q6_K.gguf) | i1-Q6_K | 6.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
gsmafoundry/AINA | gsmafoundry | "2025-04-02T13:47:43Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-02T13:47:41Z" |
mradermacher/niistorm-GGUF | mradermacher | "2025-01-08T20:06:16Z" | 24 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:karrelin/niistorm",
"base_model:quantized:karrelin/niistorm",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-08T19:14:18Z" | ---
base_model: karrelin/niistorm
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/karrelin/niistorm
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/niistorm-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/niistorm-GGUF/resolve/main/niistorm.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
nicomp/myModel | nicomp | "2023-12-26T23:57:23Z" | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"text-classification",
"en",
"dataset:fka/awesome-chatgpt-prompts",
"license:mit",
"region:us"
] | text-classification | "2023-12-26T23:44:28Z" | ---
license: mit
datasets:
- fka/awesome-chatgpt-prompts
language:
- en
metrics:
- accuracy
library_name: adapter-transformers
pipeline_tag: text-classification
--- |
kuttersn/test-clm | kuttersn | "2022-07-15T02:04:32Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-07-13T16:51:06Z" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: test-clm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-clm
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5311
- Accuracy: 0.3946
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
vsem-azamat/rubert-tiny-spam-classifier | vsem-azamat | "2025-03-07T00:08:52Z" | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | "2025-03-06T23:56:35Z" | # RuBERT Tiny Spam Classifier 🤖
[](https://opensource.org/licenses/MIT)
[](https://www.python.org/downloads/)
[](https://huggingface.co/cointegrated/rubert-tiny)
A lightweight Russian language spam classifier based on RuBERT Tiny model. The model can detect spam in text messages with high accuracy while maintaining minimal resource requirements.
## 📋 Requirements
```bash
pip install --upgrade pip
pip install -r requirements.txt
```
## 🗂️ Project Structure
```
.
├── cleaned_dataset.csv # Cleaned dataset
├── messages.csv # Source dataset
├── model/ # Trained model
├── scripts/
│ ├── inference.py # Inference script
│ ├── preprocess.ipynb # Data preprocessing notebook
│ └── train.py # Training script
└── requirements.txt # Project dependencies
```
*Datasets will not be published at this time.*
## 🎯 Usage
### Train
```bash
python3 scripts/train.py
```
### Inference
```python
# python3 scripts/inference.py
from scripts.inference import SpamClassifier
classifier = SpamClassifier(model_path="./model")
text = "Привет, у меня есть подработка, оплата 100 сек руб."
is_spam = classifier.classify(text)
print(f"Is spam: {is_spam}")
```
## 📈 Metrics
- Accuracy: 0.99
- Precision: 0.89
- Recall: 0.96
- F1 Score: 0.92
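The reported F1 follows directly from the precision and recall above, since F1 is their harmonic mean; a quick check:

```python
# F1 is the harmonic mean of precision and recall.
precision, recall = 0.89, 0.96
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.92
```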
## 🔍 Model
The classifier is based on [RuBERT Tiny](https://huggingface.co/cointegrated/rubert-tiny) - a lightweight version of RuBERT, optimized for running on low-resource machines. The model is fine-tuned on a dataset of Russian messages for spam classification.
## 📝 License
[MIT License](LICENSE)
|
mradermacher/finetuned_mental_health_distilgpt2-GGUF | mradermacher | "2025-03-01T00:59:00Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"generated_from_trainer",
"trl",
"sft",
"en",
"base_model:JordiOrtega/finetuned_mental_health_distilgpt2",
"base_model:quantized:JordiOrtega/finetuned_mental_health_distilgpt2",
"endpoints_compatible",
"region:us"
] | null | "2025-02-28T18:08:58Z" | ---
base_model: JordiOrtega/finetuned_mental_health_distilgpt2
language:
- en
library_name: transformers
model_name: finetuned_mental_health_distilgpt2
quantized_by: mradermacher
tags:
- generated_from_trainer
- trl
- sft
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/JordiOrtega/finetuned_mental_health_distilgpt2
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/finetuned_mental_health_distilgpt2-GGUF/resolve/main/finetuned_mental_health_distilgpt2.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
hopkins/eng-kor-wsample.50 | hopkins | "2023-07-04T22:59:51Z" | 109 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-04T22:45:49Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-wsample.50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-wsample.50
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9913
- Bleu: 7.0488
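For context on the metric: BLEU is a modified n-gram precision combined with a brevity penalty. A toy, unsmoothed sentence-level sketch (illustrative only — the score above comes from the trainer's evaluation pipeline, not this code):

```python
from collections import Counter
import math

def bleu(candidate: str, reference: str, max_n: int = 4) -> float:
    """Toy sentence-level BLEU: geometric mean of modified n-gram
    precisions (n = 1..max_n) times a brevity penalty. No smoothing."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped matches
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any zero precision zeroes the score
    brevity = min(1.0, math.exp(1 - len(ref) / len(cand)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(100 * bleu("the cat sat on the mat", "the cat sat on the mat"), 2))  # 100.0
```

Real evaluations use corpus-level BLEU with smoothing (e.g. sacreBLEU), so numbers from this sketch will not match the reported 7.0488.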
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Build-Your-AI-Future/MindEcho | Build-Your-AI-Future | "2025-04-07T11:35:41Z" | 0 | 0 | null | [
"safetensors",
"llama",
"unsloth",
"trl",
"sft",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-21T13:40:41Z" | ---
license: mit
tags:
- unsloth
- trl
- sft
---
|
Paladiso/7ca82843-4700-4f5b-9d27-f894f2468ad6 | Paladiso | "2025-02-24T18:20:59Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NousResearch/CodeLlama-13b-hf-flash",
"base_model:adapter:NousResearch/CodeLlama-13b-hf-flash",
"region:us"
] | null | "2025-02-24T18:04:33Z" | ---
library_name: peft
base_model: NousResearch/CodeLlama-13b-hf-flash
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7ca82843-4700-4f5b-9d27-f894f2468ad6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/CodeLlama-13b-hf-flash
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 79470adead26199e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79470adead26199e_train_data.json
type:
field_input: prompt
field_instruction: instruction
field_output: response
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: Paladiso/7ca82843-4700-4f5b-9d27-f894f2468ad6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/79470adead26199e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: f61089a6-8af4-458f-b8eb-028b46eee753
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: f61089a6-8af4-458f-b8eb-028b46eee753
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7ca82843-4700-4f5b-9d27-f894f2468ad6
This model is a fine-tuned version of [NousResearch/CodeLlama-13b-hf-flash](https://huggingface.co/NousResearch/CodeLlama-13b-hf-flash) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6436
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
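The effective (total) batch size above is just the per-device micro-batch times the accumulation steps times the device count; a minimal sketch of that bookkeeping (illustrative, not the trainer's code):

```python
def effective_batch_size(micro_batch: int, grad_accum: int, n_devices: int = 1) -> int:
    """Total train batch size = micro-batch x gradient accumulation steps x devices."""
    return micro_batch * grad_accum * n_devices

def accumulate(micro_grads: list[float], grad_accum: int) -> float:
    """Average micro-batch gradients over one accumulation window,
    mimicking the loss / grad_accum scaling applied before each optimizer step."""
    return sum(g / grad_accum for g in micro_grads[:grad_accum])

# matches the hyperparameters above: micro_batch_size=2, gradient_accumulation_steps=4
print(effective_batch_size(2, 4))  # 8
```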
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.5193 | 0.0002 | 1 | 0.6991 |
| 3.1288 | 0.0005 | 3 | 0.6986 |
| 2.6177 | 0.0010 | 6 | 0.6912 |
| 2.8098 | 0.0015 | 9 | 0.6436 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
bartowski/juanako-7b-v1-exl2 | bartowski | "2023-11-27T06:28:16Z" | 1 | 0 | null | [
"alignment-handbook",
"generated_from_trainer",
"text-generation",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"license:artistic-2.0",
"region:us"
] | text-generation | "2023-11-27T04:41:13Z" | ---
base_model: fblgit/zephyr-lora-dpo-b1
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- HuggingFaceH4/ultrafeedback_binarized
model-index:
- name: juanako-7b-v1
results: []
license: artistic-2.0
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of juanako-7b-v1
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.9">turboderp's ExLlamaV2 v0.0.9</a> for quantization.
Each branch contains a different bits-per-weight quantization; the `main` branch contains only the measurement.json for further conversions.
Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
Default arguments were used, except that above 6.0 bits per weight the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/fblgit/juanako-7b-v1
<a href="https://huggingface.co/bartowski/juanako-7b-v1-exl2/tree/4_0">4.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/juanako-7b-v1-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/juanako-7b-v1-exl2/tree/6_0">6.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/juanako-7b-v1-exl2/tree/8_0">8.0 bits per weight</a>
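As a rough guide when choosing a branch, file size scales roughly linearly with bits per weight. A back-of-envelope sketch (illustrative only; real EXL2 files carry extra overhead for metadata and the separately quantized lm_head):

```python
def approx_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate quantized file size in GB: params * bpw / 8 bits per byte."""
    return n_params * bits_per_weight / 8 / 1e9

# a 7B-parameter model at 4.0 bits per weight
print(f"{approx_size_gb(7e9, 4.0):.1f} GB")  # 3.5 GB
```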
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_0 https://huggingface.co/bartowski/juanako-7b-v1-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `juanako-7b-v1-exl2`:
```shell
mkdir juanako-7b-v1-exl2
huggingface-cli download bartowski/juanako-7b-v1-exl2 --local-dir juanako-7b-v1-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir juanako-7b-v1-exl2
huggingface-cli download bartowski/juanako-7b-v1-exl2 --revision 4_0 --local-dir juanako-7b-v1-exl2 --local-dir-use-symlinks False
```
|
prosa-text/climate-topic | prosa-text | "2025-03-03T09:03:28Z" | 3 | 0 | null | [
"pytorch",
"min",
"ban",
"bug",
"id",
"license:cc-by-sa-4.0",
"region:us"
] | null | "2024-12-05T00:18:35Z" | ---
language:
- min
- ban
- bug
- id
pretty_name: Climate Topic
license: cc-by-sa-4.0
---
## Licensing Information
The dataset is released under the terms of **CC-BY-SA 4.0**.
By using this dataset, you are also bound by the respective Terms of Use and License of the dataset.
For commercial use in small businesses and startups, please contact us ([email protected]) for permission by providing your company profile and the intended purpose of use.
el254/Ride | el254 | "2023-06-20T21:26:23Z" | 0 | 0 | keras | [
"keras",
"region:us"
] | null | "2023-06-20T19:38:14Z" | ---
library_name: keras
---
# Digit class recognition on the MNIST dataset
# Network task
The model generates a digit resembling a digit from the MNIST dataset.
## Layer-by-layer architecture diagram:

## Total number of trainable parameters
Trainable parameters: 54,160
## Optimization algorithm and loss function
Optimization algorithm: `adam`
Loss function: `categorical_crossentropy`
## Training, validation, and test dataset sizes:
Training: 60000
Test: 10000
Validation (same as test): 10000
## Training results: loss and accuracy on all three datasets:
Train Loss: 2511.731201171875
Train Accuracy: 0.7256483435630798
Test Loss: 2534.3447265625
Test Accuracy: 0.7262243628501892
Validation Loss: 2534.3447265625
Validation Accuracy: 0.7262243628501892
XdSlams/fjhqgwkjwehhrfgir28 | XdSlams | "2023-04-26T13:06:23Z" | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-04-26T07:09:24Z" | ---
license: creativeml-openrail-m
---
|
CentralogicAITeam/demo-florence-model-v03 | CentralogicAITeam | "2024-07-10T11:15:33Z" | 107 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-07-10T11:12:19Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF | NikolayKozloff | "2025-03-19T14:04:27Z" | 0 | 1 | transformers | [
"transformers",
"gguf",
"nvidia",
"llama-3",
"pytorch",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"base_model:quantized:nvidia/Llama-3.1-Nemotron-Nano-8B-v1",
"license:other",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-19T14:03:49Z" | ---
base_model: nvidia/Llama-3.1-Nemotron-Nano-8B-v1
language:
- en
library_name: transformers
license: other
license_name: nvidia-open-model-license
license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
pipeline_tag: text-generation
tags:
- nvidia
- llama-3
- pytorch
- llama-cpp
- gguf-my-repo
---
# NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF
This model was converted to GGUF format from [`nvidia/Llama-3.1-Nemotron-Nano-8B-v1`](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nvidia/Llama-3.1-Nemotron-Nano-8B-v1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo NikolayKozloff/Llama-3.1-Nemotron-Nano-8B-v1-Q8_0-GGUF --hf-file llama-3.1-nemotron-nano-8b-v1-q8_0.gguf -c 2048
```
|
llama-duo/gemma7b-summarize-gemini1_5flash-1k | llama-duo | "2024-06-13T09:20:56Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"gemma",
"alignment-handbook",
"generated_from_trainer",
"dataset:llama-duo/synth_summarize_dataset_dedup",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2024-06-13T09:15:14Z" | ---
library_name: peft
tags:
- alignment-handbook
- generated_from_trainer
datasets:
- llama-duo/synth_summarize_dataset_dedup
base_model: google/gemma-7b
model-index:
- name: gemma7b-summarize-gemini1_5flash-1k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma7b-summarize-gemini1_5flash-1k
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the llama-duo/synth_summarize_dataset_dedup dataset.
It achieves the following results on the evaluation set:
- Loss: 8.7240
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
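The cosine schedule with 10% warmup ramps the learning rate up linearly, then decays it along a half-cosine. A small sketch (an approximation, not the exact `transformers` scheduler implementation):

```python
import math

def cosine_with_warmup(step: int, total_steps: int,
                       peak_lr: float = 2e-4, warmup_ratio: float = 0.1) -> float:
    """Linear warmup to peak_lr over warmup_ratio of training,
    then cosine decay from peak_lr down to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(warmup_steps, 1)
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return peak_lr * 0.5 * (1 + math.cos(math.pi * progress))

print(cosine_with_warmup(0, 100))                 # 0.0 (start of warmup)
print(cosine_with_warmup(10, 100))                # 0.0002 (peak after warmup)
print(round(cosine_with_warmup(100, 100), 8))     # 0.0 (fully decayed)
```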
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 51.5906 | 1.0 | 2 | 16.5290 |
| 51.5906 | 2.0 | 4 | 14.1666 |
| 38.4458 | 3.0 | 6 | 13.0907 |
| 38.4458 | 4.0 | 8 | 11.6308 |
| 23.9261 | 5.0 | 10 | 10.3576 |
| 23.9261 | 6.0 | 12 | 9.4846 |
| 23.9261 | 7.0 | 14 | 9.0308 |
| 20.7948 | 8.0 | 16 | 8.8035 |
| 20.7948 | 9.0 | 18 | 8.7407 |
| 20.2787 | 10.0 | 20 | 8.7240 |
### Framework versions
- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.1.2+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1 |
mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF | mradermacher | "2025-03-23T23:44:35Z" | 248 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:YOYO-AI/Qwen2.5-Coder-14B-YOYO",
"base_model:quantized:YOYO-AI/Qwen2.5-Coder-14B-YOYO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-20T23:32:32Z" | ---
base_model: YOYO-AI/Qwen2.5-Coder-14B-YOYO
language:
- en
library_name: transformers
no_imatrix: '[43]7.1353,nan detected in blk.47.attn_q.weight'
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/YOYO-AI/Qwen2.5-Coder-14B-YOYO
<!-- provided-files -->
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q2_K.gguf) | Q2_K | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q3_K_S.gguf) | Q3_K_S | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q3_K_M.gguf) | Q3_K_M | 7.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q3_K_L.gguf) | Q3_K_L | 8.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.IQ4_XS.gguf) | IQ4_XS | 8.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q4_K_S.gguf) | Q4_K_S | 8.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q4_K_M.gguf) | Q4_K_M | 9.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q5_K_S.gguf) | Q5_K_S | 10.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q5_K_M.gguf) | Q5_K_M | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q6_K.gguf) | Q6_K | 12.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-Coder-14B-YOYO-GGUF/resolve/main/Qwen2.5-Coder-14B-YOYO.Q8_0.gguf) | Q8_0 | 15.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
timm/efficientvit_m4.r224_in1k | timm | "2025-01-21T19:20:14Z" | 308 | 0 | timm | [
"timm",
"pytorch",
"safetensors",
"image-classification",
"transformers",
"dataset:imagenet-1k",
"arxiv:2305.07027",
"license:mit",
"region:us"
] | image-classification | "2023-08-18T23:21:48Z" | ---
tags:
- image-classification
- timm
- transformers
library_name: timm
license: mit
datasets:
- imagenet-1k
---
# Model card for efficientvit_m4.r224_in1k
An EfficientViT (MSRA) image classification model. Trained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 8.8
- GMACs: 0.3
- Activations (M): 1.7
- Image size: 224 x 224
- **Papers:**
- EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention: https://arxiv.org/abs/2305.07027
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/microsoft/Cream/tree/main/EfficientViT
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('efficientvit_m4.r224_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_m4.r224_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 128, 14, 14])
# torch.Size([1, 256, 7, 7])
# torch.Size([1, 384, 4, 4])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'efficientvit_m4.r224_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 384, 4, 4) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
## Citation
```bibtex
@InProceedings{liu2023efficientvit,
title = {EfficientViT: Memory Efficient Vision Transformer with Cascaded Group Attention},
author = {Liu, Xinyu and Peng, Houwen and Zheng, Ningxin and Yang, Yuqing and Hu, Han and Yuan, Yixuan},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2023},
}
```
|
developer-flyward/qwen2-7b-instruct-trl-sft-ChartQA | developer-flyward | "2025-03-03T23:43:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-02-04T00:03:55Z" | ---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-7b-instruct-trl-sft-ChartQA
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for qwen2-7b-instruct-trl-sft-ChartQA
This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="developer-flyward/qwen2-7b-instruct-trl-sft-ChartQA", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/flyward/qwen2-7b-instruct-trl-sft-ChartQA/runs/k79gj36u)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.46.3
- Pytorch: 2.4.1+cu121
- Datasets: 3.3.2
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
gavrilstep/2014d85a-bef2-4304-9a8b-f7e892f2bdf6 | gavrilstep | "2025-01-25T07:33:52Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"gpt_neo",
"axolotl",
"generated_from_trainer",
"base_model:EleutherAI/gpt-neo-1.3B",
"base_model:adapter:EleutherAI/gpt-neo-1.3B",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T07:33:02Z" | ---
library_name: peft
license: mit
base_model: EleutherAI/gpt-neo-1.3B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2014d85a-bef2-4304-9a8b-f7e892f2bdf6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: EleutherAI/gpt-neo-1.3B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 414256b99bc71583_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/414256b99bc71583_train_data.json
type:
field_input: choices
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device: cuda
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 5
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: gavrilstep/2014d85a-bef2-4304-9a8b-f7e892f2bdf6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 75GiB
max_steps: 30
micro_batch_size: 2
mlflow_experiment_name: /tmp/414256b99bc71583_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 10
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: true
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a9ee1f6f-a2ee-46d2-8825-32d1c8a14f27
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a9ee1f6f-a2ee-46d2-8825-32d1c8a14f27
warmup_steps: 10
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 2014d85a-bef2-4304-9a8b-f7e892f2bdf6
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the dataset described in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 1.3590
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 25
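The effective batch size of 8 above comes from a micro-batch size of 2 combined with 4 gradient-accumulation steps. As a hedged illustration (not the trainer's actual code), the sketch below shows in NumPy that averaging gradients over equally sized micro-batches reproduces the full-batch gradient:

```python
import numpy as np

# Illustrative sketch: gradient accumulation for a least-squares loss.
# Averaging gradients over 4 micro-batches of size 2 matches a single
# gradient computed over the full batch of 8.
rng = np.random.default_rng(42)
X, y = rng.normal(size=(8, 3)), rng.normal(size=8)
w = np.zeros(3)

def grad(Xb, yb, w):
    # Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Accumulate over micro-batches, weighting each by its share of the batch
accum = np.zeros(3)
for i in range(0, 8, 2):
    accum += grad(X[i:i + 2], y[i:i + 2], w) * 2 / 8

full = grad(X, y, w)
print(np.allclose(accum, full))  # the two gradients agree
```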
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0408 | 1 | 2.0149 |
| 8.3085 | 0.2041 | 5 | 1.9805 |
| 8.1239 | 0.4082 | 10 | 1.7718 |
| 6.2674 | 0.6122 | 15 | 1.4947 |
| 5.9176 | 0.8163 | 20 | 1.3810 |
| 5.4735 | 1.0204 | 25 | 1.3590 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ClarenceDan/cb1c1d46-9ca4-4c3a-8e7a-8c4384b6c83a | ClarenceDan | "2025-03-06T01:43:55Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:berkeley-nest/Starling-LM-7B-alpha",
"base_model:adapter:berkeley-nest/Starling-LM-7B-alpha",
"license:apache-2.0",
"region:us"
] | null | "2025-03-06T01:19:39Z" | ---
library_name: peft
license: apache-2.0
base_model: berkeley-nest/Starling-LM-7B-alpha
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cb1c1d46-9ca4-4c3a-8e7a-8c4384b6c83a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: berkeley-nest/Starling-LM-7B-alpha
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8b2360c5c0395ce2_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8b2360c5c0395ce2_train_data.json
type:
field_input: conversation
field_instruction: note
field_output: summary
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: ClarenceDan/cb1c1d46-9ca4-4c3a-8e7a-8c4384b6c83a
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8b2360c5c0395ce2_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 207ee5c0-eb25-4c0d-802e-e5f74ee9ad16
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 207ee5c0-eb25-4c0d-802e-e5f74ee9ad16
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cb1c1d46-9ca4-4c3a-8e7a-8c4384b6c83a
This model is a fine-tuned version of [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on the dataset described in the configuration above.
It achieves the following results on the evaluation set:
- Loss: 0.2722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (bitsandbytes 8-bit) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.8853 | 0.0003 | 1 | 0.9757 |
| 3.7292 | 0.0009 | 3 | 0.9094 |
| 2.9633 | 0.0017 | 6 | 0.6149 |
| 1.4667 | 0.0026 | 9 | 0.2722 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
emilianJR/haruna_lora | emilianJR | "2023-03-25T17:44:29Z" | 5 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-03-25T08:14:39Z" |
---
license: creativeml-openrail-m
base_model: andite/anything-v4.0
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
inference: true
---
# LoRA text2image fine-tuning - https://huggingface.co/kubanemil/haruna_lora
These are LoRA adaptation weights for https://huggingface.co/kubanemil/haruna_lora. The weights were fine-tuned on the Haruna Sakura images dataset. You can find some example images below.
|
Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2 | Zoyd | "2024-06-04T19:33:01Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"roleplay",
"en",
"arxiv:2212.04089",
"base_model:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:merge:KatyTheCutie/LemonadeRP-4.5.3",
"base_model:SanjiWatsuki/Kunoichi-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-7B",
"base_model:SanjiWatsuki/Silicon-Maid-7B",
"base_model:merge:SanjiWatsuki/Silicon-Maid-7B",
"base_model:Sao10K/Fimbulvetr-11B-v2",
"base_model:merge:Sao10K/Fimbulvetr-11B-v2",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:merge:mistralai/Mistral-7B-v0.1",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-06-04T19:17:45Z" | ---
license: cc-by-4.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
- SanjiWatsuki/Kunoichi-7B
- SanjiWatsuki/Silicon-Maid-7B
- KatyTheCutie/LemonadeRP-4.5.3
- Sao10K/Fimbulvetr-11B-v2
library_name: transformers
tags:
- mergekit
- merge
- mistral
- text-generation
- roleplay
model-index:
- name: Smart-Lemon-Cookie-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 69.62
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.55
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 61.59
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 58.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=FallenMerick/Chunky-Lemon-Cookie-11B
name: Open LLM Leaderboard
---
**Exllamav2** quant (**exl2** / **6.5 bpw**) made with ExLlamaV2 v0.1.3
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_2bpw_exl2)**</center> | <center>3126 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-2_5bpw_exl2)**</center> | <center>3469 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_0bpw_exl2)**</center> | <center>4092 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_5bpw_exl2)**</center> | <center>4717 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-3_75bpw_exl2)**</center> | <center>5029 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_0bpw_exl2)**</center> | <center>5341 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-4_25bpw_exl2)**</center> | <center>5653 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-5_0bpw_exl2)**</center> | <center>6589 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_0bpw_exl2)**</center> | <center>7862 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-6_5bpw_exl2)**</center> | <center>8467 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/FallenMerick_Chunky-Lemon-Cookie-11B-8_0bpw_exl2)**</center> | <center>9713 MB</center> | <center>8</center> |

# Chunky-Lemon-Cookie-11B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
GGUF quants:
* https://huggingface.co/backyardai/Chunky-Lemon-Cookie-11B-GGUF
* https://huggingface.co/mradermacher/Chunky-Lemon-Cookie-11B-GGUF
## Merge Details
### Merge Method
This model was merged using the following methods:
* passthrough
* [task arithmetic](https://arxiv.org/abs/2212.04089)
### Models Merged
The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-7B)
* [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B)
* [KatyTheCutie/LemonadeRP-4.5.3](https://huggingface.co/KatyTheCutie/LemonadeRP-4.5.3)
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)
* [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
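Task arithmetic (the linked paper) merges models by adding weighted parameter deltas to a base model: merged = base + Σ wᵢ·(θᵢ − base). A hedged NumPy sketch, not mergekit's actual implementation, using the 0.85/0.15 weights from the configuration below:

```python
import numpy as np

# Illustrative task-arithmetic merge on toy 3-element "parameter" vectors.
base = np.array([1.0, 2.0, 3.0])     # stand-in for the base model's weights
model_a = np.array([1.5, 2.0, 2.0])  # stand-in, e.g. Big-Lemon-Cookie-11B
model_b = np.array([1.0, 3.0, 4.0])  # stand-in, e.g. Fimbulvetr-11B-v2
w_a, w_b = 0.85, 0.15                # weights from the merge config

# Each fine-tuned model contributes its delta from the base, scaled.
merged = base + w_a * (model_a - base) + w_b * (model_b - base)
print(merged)  # -> [1.425 2.15  2.3  ]
```

In practice mergekit applies this per tensor across the full state dicts; the toy vectors above only illustrate the arithmetic.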
### Configuration
The following YAML configurations were used to produce this model:
```yaml
slices:
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [0, 24]
- sources:
- model: mistralai/Mistral-7B-v0.1
layer_range: [8, 32]
merge_method: passthrough
dtype: float16
name: Mistral-11B
---
slices:
- sources:
- model: SanjiWatsuki/Kunoichi-7B
layer_range: [0, 24]
- sources:
- model: SanjiWatsuki/Silicon-Maid-7B
layer_range: [8, 24]
- sources:
- model: KatyTheCutie/LemonadeRP-4.5.3
layer_range: [24, 32]
merge_method: passthrough
dtype: float16
name: Big-Lemon-Cookie-11B
---
models:
- model: Big-Lemon-Cookie-11B
parameters:
weight: 0.85
- model: Sao10K/Fimbulvetr-11B-v2
parameters:
weight: 0.15
merge_method: task_arithmetic
base_model: Mistral-11B
dtype: float16
name: Chunky-Lemon-Cookie-11B
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_FallenMerick__Chunky-Lemon-Cookie-11B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |70.23|
|AI2 Reasoning Challenge (25-Shot)|69.62|
|HellaSwag (10-Shot) |86.55|
|MMLU (5-Shot) |65.35|
|TruthfulQA (0-shot) |61.59|
|Winogrande (5-shot) |79.79|
|GSM8k (5-shot) |58.45| |
Romain-XV/cb482234-a397-4fc4-975b-2fe36df3c046 | Romain-XV | "2025-04-04T08:32:15Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"region:us"
] | null | "2025-04-04T06:07:14Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:[email protected]">an email</a></p>
</div>
</main>
</body>
</html> |
jo-mengr/mmcontext-geo7k-cellxgene3.5k-pairs | jo-mengr | "2025-02-20T12:32:27Z" | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9450",
"loss:ContrastiveLoss",
"code",
"dataset:jo-mengr/geo_7k_cellxgene_3_5k_pairs",
"arxiv:1908.10084",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2025-02-20T12:32:11Z" | ---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9450
- loss:ContrastiveLoss
widget:
- source_sentence: '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download",
"embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download",
"X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download",
"X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download",
"X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}},
"sample_id": "census_88a01d4a-5197-45be-b5ae-e019aef43376_710"}'
sentences:
- Sample is a CD8-positive, alpha-beta T cell derived from blood of a 45-year old
European male with managed systemic lupus erythematosus (SLE). The cell exhibits
elevated expression of type 1 interferon-stimulated genes (ISGs) and reduced naïve
CD4+ T cells correlating with monocyte ISG expression, as well as an expansion
of repertoire-restricted cytotoxic GZMH+ CD8+ T cells.
- Endothelial cell from a 42-year-old male cerebral cortex tissue, specifically
from the Superior Temporal Gyrus (STG) dissection, with European ethnicity, analyzed
using nucleus suspension type.
- Sample is a lymphocyte cell type, specifically lymphatics, located in the lamina
propria of mucosa of colon, taken from a female in her third decade of life with
Crohn's disease.
- source_sentence: '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download",
"embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download",
"X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download",
"X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download",
"X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}},
"sample_id": "census_367b55f4-d543-49aa-90e8-4765fcb8c687_187"}'
sentences:
- Oligodendrocyte precursor cell derived from the hippocampal formation (Tail of
Hippocampus (HiT) - Subicular cortex - Sub) of a 42-year old male.
- A cell sample from the breast of a young, normal weight, premenopausal female
of European ethnicity with low breast density. The cell type is identified as
a CD4-positive, alpha-beta T cell in the mature stage, derived from a prophylactic
mastectomy sample through mechanical and enzymatic dissociation.
- A neuron cell type from a 29-year-old male cerebral nuclei, specifically from
the Basal forebrain (BF) - substantia innominata and nearby nuclei - SI region,
with European self-reported ethnicity, analyzed at the nucleus level.
- source_sentence: '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download",
"embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download",
"X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download",
"X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download",
"X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}},
"sample_id": "census_574e9f9e-f8b4-41ef-bf19-89a9964fd9c7_10310"}'
sentences:
- A mature NK T cell derived from breast tissue of an African American female, obtained
through Reduction Mammoplasty procedure. The cell was extracted using mechanical,
enzymatic dissociation, and centrifugation with 1 mg/ml collagenase A for 3 hours,
resulting in 84% cell viability.
- Dendritic cell sample taken from proximal lung tissue of a male human at the 20th
week post-fertilization stage.
- Memory B cell from a 3-year-old male human with recurrent tonsillitis, expressing
IgG3 isotype, IGLC2, and IGLV2-23-IGLJ2 antibody.
- source_sentence: '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download",
"embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download",
"X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download",
"X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download",
"X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}},
"sample_id": "census_b46237d1-19c6-4af2-9335-9854634bad16_10634"}'
sentences:
- Endothelial cell from the sinoatrial node of a male individual in their fifth
decade, which has been flushed.
- T cell sample derived from decidua tissue, 9 post conception weeks (9_PCW).
- Central nervous system macrophage, specifically microglia, derived from the pons
of a 50-year-old male.
- source_sentence: '{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download",
"embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download",
"X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download",
"X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download",
"X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}},
"sample_id": "census_2872f4b0-b171-46e2-abc6-befcf6de6306_3967"}'
sentences:
- A kidney collecting duct intercalated cell from the cortex of a 76-year-old male
with an eGFR between 50-59, BMI between 25.0-29.9, and European ethnicity.
- Dendritic cells (DCs) from the transverse colon of a 65-79 year-old male.
- Neuron cell type from a 50-year-old male human cerebral cortex, specifically from
the Long insular gyri, Dysgranular insular cortex, and Idg region, with European
ethnicity.
datasets:
- jo-mengr/geo_7k_cellxgene_3_5k_pairs
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
model-index:
- name: SentenceTransformer
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy
value: 0.8761904761904762
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.8336004018783569
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.8291457286432161
name: Cosine F1
- type: cosine_f1_threshold
value: 0.7532867193222046
name: Cosine F1 Threshold
- type: cosine_precision
value: 0.7399103139013453
name: Cosine Precision
- type: cosine_recall
value: 0.9428571428571428
name: Cosine Recall
- type: cosine_ap
value: 0.8697469370664385
name: Cosine Ap
- type: cosine_mcc
value: 0.7411361603542219
name: Cosine Mcc
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained on the [geo_7k_cellxgene_3_5k_pairs](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs) dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
<!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
- **Maximum Sequence Length:** None tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [geo_7k_cellxgene_3_5k_pairs](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): BertModel(
(embeddings): BertEmbeddings(
(word_embeddings): Embedding(28996, 768, padding_idx=0)
(position_embeddings): Embedding(512, 768)
(token_type_embeddings): Embedding(2, 768)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(encoder): BertEncoder(
(layer): ModuleList(
(0-11): 12 x BertLayer(
(attention): BertAttention(
(self): BertSdpaSelfAttention(
(query): Linear(in_features=768, out_features=768, bias=True)
(key): Linear(in_features=768, out_features=768, bias=True)
(value): Linear(in_features=768, out_features=768, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): BertSelfOutput(
(dense): Linear(in_features=768, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): BertIntermediate(
(dense): Linear(in_features=768, out_features=3072, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): BertOutput(
(dense): Linear(in_features=3072, out_features=768, bias=True)
(LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
)
)
(pooler): BertPooler(
(dense): Linear(in_features=768, out_features=768, bias=True)
(activation): Tanh()
)
)
(text_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=768, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=2048, bias=True)
(3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(omics_adapter): AdapterModule(
(net): Sequential(
(0): Linear(in_features=64, out_features=512, bias=True)
(1): ReLU(inplace=True)
(2): Linear(in_features=512, out_features=2048, bias=True)
(3): BatchNorm1d(2048, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
)
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-geo7k-cellxgene3.5k-pairs")
# Run inference
sentences = [
'{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}}, "sample_id": "census_2872f4b0-b171-46e2-abc6-befcf6de6306_3967"}',
'Neuron cell type from a 50-year-old male human cerebral cortex, specifically from the Long insular gyri, Dysgranular insular cortex, and Idg region, with European ethnicity.',
'Dendritic cells (DCs) from the transverse colon of a 65-79 year-old male.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:-----------|
| cosine_accuracy | 0.8762 |
| cosine_accuracy_threshold | 0.8336 |
| cosine_f1 | 0.8291 |
| cosine_f1_threshold | 0.7533 |
| cosine_precision | 0.7399 |
| cosine_recall | 0.9429 |
| **cosine_ap** | **0.8697** |
| cosine_mcc | 0.7411 |
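The thresholded metrics above (e.g. `cosine_accuracy_threshold`) come from sweeping a decision threshold over the pairs' cosine similarities and keeping the one that maximizes the metric. A hedged sketch of the idea, illustrative rather than the evaluator's actual code:

```python
import numpy as np

# Pairs whose cosine similarity clears the threshold are predicted as
# matches (label 1); the threshold is chosen to maximize accuracy.
def cosine_sim(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

rng = np.random.default_rng(0)
emb_a = rng.normal(size=(6, 4))
emb_b = emb_a + rng.normal(scale=0.2, size=(6, 4))  # near-duplicate pairs
labels = np.array([1, 1, 1, 0, 0, 0])
emb_b[3:] = rng.normal(size=(3, 4))                 # replace the negatives

sims = cosine_sim(emb_a, emb_b)
# Sweep candidate thresholds (the observed similarities themselves)
best = max(sims, key=lambda t: np.mean((sims >= t) == labels))
preds = (sims >= best).astype(int)
print("threshold:", round(float(best), 3), "accuracy:", np.mean(preds == labels))
```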
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### geo_7k_cellxgene_3_5k_pairs
* Dataset: [geo_7k_cellxgene_3_5k_pairs](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs) at [617fc61](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs/tree/617fc61ab4ae643118479a186ba729ff10e6b0e0)
* Size: 9,450 training samples
* Columns: <code>anndata_ref</code>, <code>caption</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | caption | label |
|:--------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 510 characters</li><li>mean: 512.71 characters</li><li>max: 514 characters</li></ul> | <ul><li>min: 43 characters</li><li>mean: 162.51 characters</li><li>max: 1070 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| anndata_ref | caption | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/DCW3zXGDx6DWY7i/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbjeimYBdjefbpg/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/mggGyqZE6892DWz/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/Rt4wXwEPifBT2nX/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/dmkHbFpkJLLqHPx/download"}}, "sample_id": "census_a37f857c-779f-464e-9310-3db43a1811e7_2741"}</code> | <code>Sample is a macrophage cell type derived from the ileal epithelium tissue of a female human in her fourth decade.</code> | <code>1.0</code> |
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/DCW3zXGDx6DWY7i/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbjeimYBdjefbpg/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/mggGyqZE6892DWz/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/Rt4wXwEPifBT2nX/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/dmkHbFpkJLLqHPx/download"}}, "sample_id": "census_a37f857c-779f-464e-9310-3db43a1811e7_2741"}</code> | <code>Erythrocyte cells at the mid erythroid stage, derived from bone marrow of a male human fetus at 15 weeks post-fertilization.</code> | <code>0.0</code> |
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/DCW3zXGDx6DWY7i/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbjeimYBdjefbpg/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/mggGyqZE6892DWz/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/Rt4wXwEPifBT2nX/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/dmkHbFpkJLLqHPx/download"}}, "sample_id": "census_a37f857c-779f-464e-9310-3db43a1811e7_2741"}</code> | <code>Native cell from the spleen of a 15th week post-fertilization human female, identified as DOUBLET_IMMUNE_FIBROBLAST.</code> | <code>0.0</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
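The ContrastiveLoss configuration above (cosine distance, margin 0.5, `size_average=true`) can be sketched in pure Python. This is an illustrative reimplementation, not the sentence-transformers code; the 0.5 factor follows the Hadsell et al. formulation that the library's loss is based on:

```python
import math

def cosine_distance(u, v):
    # SiameseDistanceMetric.COSINE_DISTANCE: 1 - cosine similarity
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def contrastive_loss(pairs, margin=0.5):
    """pairs: iterable of (embedding_a, embedding_b, label); label 1.0 = matching pair."""
    losses = []
    for u, v, label in pairs:
        d = cosine_distance(u, v)
        # positives are pulled together; negatives are pushed beyond the margin
        losses.append(0.5 * (label * d ** 2 + (1.0 - label) * max(0.0, margin - d) ** 2))
    return sum(losses) / len(losses)  # size_average=true -> mean over the batch
```

A matching pair with identical embeddings contributes zero loss, as does a non-matching pair whose cosine distance already exceeds the margin.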
### Evaluation Dataset
#### geo_7k_cellxgene_3_5k_pairs
* Dataset: [geo_7k_cellxgene_3_5k_pairs](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs) at [617fc61](https://huggingface.co/datasets/jo-mengr/geo_7k_cellxgene_3_5k_pairs/tree/617fc61ab4ae643118479a186ba729ff10e6b0e0)
* Size: 1,050 evaluation samples
* Columns: <code>anndata_ref</code>, <code>caption</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | anndata_ref | caption | label |
|:--------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 510 characters</li><li>mean: 512.77 characters</li><li>max: 514 characters</li></ul> | <ul><li>min: 50 characters</li><li>mean: 159.74 characters</li><li>max: 924 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| anndata_ref | caption | label |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}}, "sample_id": "census_b46237d1-19c6-4af2-9335-9854634bad16_7973"}</code> | <code>Sample contains stem cells (LGR5 stem) derived from the duodeno-jejunal junction of a human fetus at Carnegie stage 23.</code> | <code>1.0</code> |
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}}, "sample_id": "census_b46237d1-19c6-4af2-9335-9854634bad16_7973"}</code> | <code>A 46-year old female's liver sample, specifically conventional dendritic cell type 1 (cDC1s) enriched in CD45+ cell suspension, with no reported liver-related diseases.</code> | <code>0.0</code> |
| <code>{"file_record": {"dataset_path": "https://nxc-fredato.imbi.uni-freiburg.de/s/EbaoL4ydTqmYwP9/download", "embeddings": {"X_hvg": "https://nxc-fredato.imbi.uni-freiburg.de/s/X8EFSis4S5ecdse/download", "X_pca": "https://nxc-fredato.imbi.uni-freiburg.de/s/DGxs2PkPeDF2RGm/download", "X_scvi": "https://nxc-fredato.imbi.uni-freiburg.de/s/bm3N8RCWePiyJKz/download", "X_geneformer": "https://nxc-fredato.imbi.uni-freiburg.de/s/8FGZG6EzMeBYxjX/download"}}, "sample_id": "census_b46237d1-19c6-4af2-9335-9854634bad16_7973"}</code> | <code>A CD16-negative, CD56-bright natural killer cell sample taken from the spleen of a male in his sixth decade.</code> | <code>0.0</code> |
* Loss: [<code>ContrastiveLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#contrastiveloss) with these parameters:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
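For reference, a sketch of expressing these non-default hyperparameters as training arguments. The `output_dir` value is a placeholder (not taken from this card), and the class/argument names assume sentence-transformers >= 3.0:

```python
# Hedged sketch: the non-default hyperparameters above as training arguments.
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",             # placeholder, not specified by the card
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    num_train_epochs=2,
    warmup_ratio=0.1,
)
```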
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | cosine_ap |
|:------:|:----:|:-------------:|:---------------:|:---------:|
| -1 | -1 | - | - | 0.3424 |
| 0.1692 | 100 | 0.1624 | 0.5160 | 0.3746 |
| 0.3384 | 200 | 0.1084 | 0.0829 | 0.5699 |
| 0.5076 | 300 | 0.0562 | 0.0391 | 0.6742 |
| 0.6768 | 400 | 0.044 | 0.0242 | 0.7774 |
| 0.8460 | 500 | 0.0288 | 0.0189 | 0.8141 |
| 1.0152 | 600 | 0.027 | 0.0185 | 0.8235 |
| 1.1844 | 700 | 0.0229 | 0.0157 | 0.8289 |
| 1.3536 | 800 | 0.0206 | 0.0141 | 0.8536 |
| 1.5228 | 900 | 0.0207 | 0.0143 | 0.8555 |
| 1.6920 | 1000 | 0.0178 | 0.0148 | 0.8528 |
| 1.8613 | 1100 | 0.0189 | 0.0142 | 0.8697 |
### Framework Versions
- Python: 3.10.10
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.43.4
- PyTorch: 2.6.0+cu124
- Accelerate: 0.33.0
- Datasets: 2.14.4
- Tokenizers: 0.19.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### ContrastiveLoss
```bibtex
@inproceedings{hadsell2006dimensionality,
author={Hadsell, R. and Chopra, S. and LeCun, Y.},
booktitle={2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06)},
title={Dimensionality Reduction by Learning an Invariant Mapping},
year={2006},
volume={2},
number={},
pages={1735-1742},
doi={10.1109/CVPR.2006.100}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
S4nto/lora-dpo-finetuned-stage4-sft-0.5-1e-6_ep5 | S4nto | "2024-05-23T01:07:31Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-05-23T00:57:47Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
DavidAU/UNA-SOLAR-10.7B-Instruct-v1.0-Q8_0-GGUF | DavidAU | "2024-04-18T00:13:07Z" | 3 | 0 | transformers | [
"transformers",
"gguf",
"alignment-handbook",
"generated_from_trainer",
"UNA",
"single-turn",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:upstage/SOLAR-10.7B-Instruct-v1.0",
"base_model:quantized:upstage/SOLAR-10.7B-Instruct-v1.0",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-04-18T00:12:41Z" | ---
language:
- en
license: cc-by-nc-nd-4.0
library_name: transformers
tags:
- alignment-handbook
- generated_from_trainer
- UNA
- single-turn
- llama-cpp
- gguf-my-repo
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
model-index:
- name: UNA-SOLAR-10.7B-Instruct-v1.0
results: []
---
# DavidAU/UNA-SOLAR-10.7B-Instruct-v1.0-Q8_0-GGUF
This model was converted to GGUF format from [`fblgit/UNA-SOLAR-10.7B-Instruct-v1.0`](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/UNA-SOLAR-10.7B-Instruct-v1.0-Q8_0-GGUF --model una-solar-10.7b-instruct-v1.0.Q8_0.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/UNA-SOLAR-10.7B-Instruct-v1.0-Q8_0-GGUF --model una-solar-10.7b-instruct-v1.0.Q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m una-solar-10.7b-instruct-v1.0.Q8_0.gguf -n 128
```
|
wangrongsheng/careinternlm-20B-Chat-sft-multi | wangrongsheng | "2023-09-24T05:05:31Z" | 3 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-24T05:04:30Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
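A hedged sketch of reproducing this quantization config with `transformers.BitsAndBytesConfig` (argument names assume a transformers release compatible with PEFT 0.4.0; bitsandbytes must be installed):

```python
# Recreating the 4-bit NF4 quantization config listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # load_in_8bit stays False
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
    llm_int8_threshold=6.0,
)
```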
### Framework versions
- PEFT 0.4.0
|
sail-rvc/Fluttershy_e500 | sail-rvc | "2023-07-14T07:23:38Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:22:25Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# Fluttershy_e500
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:23:38
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
albertus-sussex/veriscrape-simcse-university-reference_9_to_verify_1-fold-4 | albertus-sussex | "2025-03-29T10:37:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-29T10:36:58Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
damgomz/ft_16_11e6_base_x1 | damgomz | "2024-06-22T09:59:23Z" | 18 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-21T15:54:51Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 67625.15619707108 |
| Emissions (Co2eq in kg) | 0.0409210267090644 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.7983509253144281 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0704422363820175 |
| Consumed energy (kWh) | 0.8687931616964483 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
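As a sanity check on the CodeCarbon figures above: each energy value is power (W) times duration (s) converted to kWh, and the consumed energy is the CPU and RAM contributions summed (no GPU was present). The small residual differences come from CodeCarbon's interval-based integration:

```python
# Reproducing the energy arithmetic from the table above.
duration_s = 67625.15619707108
cpu_power_w, ram_power_w = 42.5, 3.75

cpu_kwh = cpu_power_w * duration_s / 3.6e6   # ~0.7983 kWh
ram_kwh = ram_power_w * duration_s / 3.6e6   # ~0.0704 kWh
consumed_kwh = cpu_kwh + ram_kwh             # ~0.8688 kWh
```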
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.1301784256793618 |
| Emissions (Co2eq in kg) | 0.026486519510519498 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_16_11e6_base_x1 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.1e-05 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.704286 | 0.437405 |
| 1 | 0.330187 | 0.242732 | 0.939067 |
| 2 | 0.194050 | 0.230007 | 0.907106 |
| 3 | 0.146359 | 0.229586 | 0.909991 |
| 4 | 0.098502 | 0.233764 | 0.932297 |
| 5 | 0.064628 | 0.255224 | 0.916755 |
| 6 | 0.047931 | 0.288994 | 0.918727 |
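The table above reports F-beta scores without defining them; for reference, F-beta combines precision and recall, with beta > 1 weighting recall more heavily. The precision/recall values in the sketch below are hypothetical, not taken from this card:

```python
# F-beta definition sketch (hypothetical inputs for illustration only).
def f_beta(precision, recall, beta=1.0):
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```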
|
NoahDrisort/speaker-segmentation-fine-tuned-callhome-jpn | NoahDrisort | "2024-04-26T10:43:33Z" | 50 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"pyannet",
"speaker-diarization",
"speaker-segmentation",
"generated_from_trainer",
"dataset:diarizers-community/callhome",
"base_model:pyannote/segmentation-3.0",
"base_model:finetune:pyannote/segmentation-3.0",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-04-25T12:45:59Z" | ---
license: mit
base_model: pyannote/segmentation-3.0
tags:
- speaker-diarization
- speaker-segmentation
- generated_from_trainer
datasets:
- diarizers-community/callhome
model-index:
- name: speaker-segmentation-fine-tuned-callhome-jpn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speaker-segmentation-fine-tuned-callhome-jpn
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the diarizers-community/callhome jpn dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4828
- Der: 0.1446
- False Alarm: 0.0404
- Missed Detection: 0.0606
- Confusion: 0.0435
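As a quick check, the diarization error rate (DER) decomposes as false alarm + missed detection + confusion, so the three component rates above add up to the reported DER (up to rounding):

```python
# DER decomposition using the evaluation figures reported above.
false_alarm, missed_detection, confusion = 0.0404, 0.0606, 0.0435
der = false_alarm + missed_detection + confusion   # ~0.1445, reported as 0.1446
```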
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Der | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:----------------:|:---------:|
| 0.5955 | 1.0 | 394 | 0.5366 | 0.1609 | 0.0435 | 0.0706 | 0.0468 |
| 0.5648 | 2.0 | 788 | 0.4979 | 0.1509 | 0.0400 | 0.0646 | 0.0462 |
| 0.5392 | 3.0 | 1182 | 0.4852 | 0.1489 | 0.0447 | 0.0588 | 0.0453 |
| 0.5283 | 4.0 | 1576 | 0.4756 | 0.1442 | 0.0412 | 0.0607 | 0.0422 |
| 0.5109 | 5.0 | 1970 | 0.4828 | 0.1446 | 0.0404 | 0.0606 | 0.0435 |
### Framework versions
- Transformers 4.40.1
- Pytorch 2.2.2+cu118
- Datasets 2.18.0
- Tokenizers 0.19.1
|
vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF | vtriple | "2025-01-09T04:34:56Z" | 66 | 1 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:vtriple/Qwen-2.5-7B-Threatflux",
"base_model:quantized:vtriple/Qwen-2.5-7B-Threatflux",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-09T04:34:34Z" | ---
license: apache-2.0
base_model: vtriple/Qwen-2.5-7B-Threatflux
tags:
- llama-cpp
- gguf-my-repo
---
# vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF
This model was converted to GGUF format from [`vtriple/Qwen-2.5-7B-Threatflux`](https://huggingface.co/vtriple/Qwen-2.5-7B-Threatflux) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/vtriple/Qwen-2.5-7B-Threatflux) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo vtriple/Qwen-2.5-7B-Threatflux-Q4_K_M-GGUF --hf-file qwen-2.5-7b-threatflux-q4_k_m.gguf -c 2048
```
|