modelId
stringlengths 5
139
| author
stringlengths 2
42
| last_modified
timestamp[us, tz=UTC]date 2020-02-15 11:33:14
2025-08-02 18:27:42
| downloads
int64 0
223M
| likes
int64 0
11.7k
| library_name
stringclasses 549
values | tags
listlengths 1
4.05k
| pipeline_tag
stringclasses 55
values | createdAt
timestamp[us, tz=UTC]date 2022-03-02 23:29:04
2025-08-02 18:24:50
| card
stringlengths 11
1.01M
|
---|---|---|---|---|---|---|---|---|---|
mergekit-community/mergekit-sce-uinsryt
|
mergekit-community
| 2025-03-02T23:48:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2408.07990",
"base_model:mergekit-community/mergekit-della_linear-byjvyzy",
"base_model:merge:mergekit-community/mergekit-della_linear-byjvyzy",
"base_model:mergekit-community/mergekit-slerp-lupllmg",
"base_model:merge:mergekit-community/mergekit-slerp-lupllmg",
"base_model:mergekit-community/mergekit-slerp-rayqjvs",
"base_model:merge:mergekit-community/mergekit-slerp-rayqjvs",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T23:43:40Z |
---
base_model:
- mergekit-community/mergekit-slerp-lupllmg
- mergekit-community/mergekit-della_linear-byjvyzy
- mergekit-community/mergekit-slerp-rayqjvs
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [mergekit-community/mergekit-slerp-lupllmg](https://huggingface.co/mergekit-community/mergekit-slerp-lupllmg) as a base.
### Models Merged
The following models were included in the merge:
* [mergekit-community/mergekit-della_linear-byjvyzy](https://huggingface.co/mergekit-community/mergekit-della_linear-byjvyzy)
* [mergekit-community/mergekit-slerp-rayqjvs](https://huggingface.co/mergekit-community/mergekit-slerp-rayqjvs)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mergekit-community/mergekit-della_linear-byjvyzy
- model: mergekit-community/mergekit-slerp-lupllmg
- model: mergekit-community/mergekit-slerp-rayqjvs
merge_method: sce
base_model: mergekit-community/mergekit-slerp-lupllmg
parameters:
select_topk: 0.67
dtype: bfloat16
```
|
great0001/e1a58458-29a5-4fe1-ab59-0779018ed3c9
|
great0001
| 2025-03-02T23:47:40Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"base_model:adapter:NousResearch/Nous-Hermes-2-Mistral-7B-DPO",
"region:us"
] | null | 2025-03-02T23:47:24Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
model-index:
- name: great0001/e1a58458-29a5-4fe1-ab59-0779018ed3c9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# great0001/e1a58458-29a5-4fe1-ab59-0779018ed3c9
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2312
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
irishprancer/24bf5194-4e2b-43e8-b9a8-75e61584b760
|
irishprancer
| 2025-03-02T23:47:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T18:36:40Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gsasikiran/bart-base-finetuned-cnn
|
gsasikiran
| 2025-03-02T23:45:35Z | 35 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"summarization",
"newsarticles",
"en",
"dataset:abisee/cnn_dailymail",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
summarization
| 2025-02-13T14:10:50Z |
---
license: apache-2.0
datasets:
- abisee/cnn_dailymail
language:
- en
metrics:
- rouge
base_model:
- facebook/bart-base
pipeline_tag: summarization
library_name: transformers
tags:
- summarization
- newsarticles
---
|
DrGwin/setfit-paraphrase-mpnet-base-v2-sst2A
|
DrGwin
| 2025-03-02T23:44:58Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2025-03-02T23:44:41Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'for this reason and this reason only -- the power of its own steadfast ,
hoity-toity convictions -- chelsea walls deserves a medal . '
- text: 'aside from minor tinkering , this is the same movie you probably loved in
1994 , except that it looks even better . '
- text: 'cq ''s reflection of artists and the love of cinema-and-self suggests nothing
less than a new voice that deserves to be considered as a possible successor to
the best european directors . '
- text: 'i had to look away - this was god awful . '
- text: 'i ''ll bet the video game is a lot more fun than the film . '
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.89
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:--------------------------------------------------------------------------------------------------------------------------------------------|
| positive | <ul><li>'klein , charming in comedies like american pie and dead-on in election , '</li><li>'be fruitful '</li><li>'soulful and '</li></ul> |
| negative | <ul><li>'covered earlier and much better '</li><li>'it too is a bomb . '</li><li>'guilty about it '</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.89 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("DrGwin/setfit-paraphrase-mpnet-base-v2-sst2A")
# Run inference
preds = model("i had to look away - this was god awful . ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 2 | 9.55 | 46 |
| Label | Training Sample Count |
|:---------|:----------------------|
| negative | 40 |
| positive | 60 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0030 | 1 | 0.4181 | - |
| 0.1506 | 50 | 0.2514 | - |
| 0.3012 | 100 | 0.0932 | - |
| 0.4518 | 150 | 0.0029 | - |
| 0.6024 | 200 | 0.001 | - |
| 0.7530 | 250 | 0.0006 | - |
| 0.9036 | 300 | 0.0006 | - |
| 1.0 | 332 | - | 0.1722 |
| 1.0542 | 350 | 0.0014 | - |
| 1.2048 | 400 | 0.0004 | - |
| 1.3554 | 450 | 0.0004 | - |
| 1.5060 | 500 | 0.0095 | - |
| 1.6566 | 550 | 0.0003 | - |
| 1.8072 | 600 | 0.0003 | - |
| 1.9578 | 650 | 0.0003 | - |
| 2.0 | 664 | - | 0.1820 |
| 2.1084 | 700 | 0.0003 | - |
| 2.2590 | 750 | 0.0023 | - |
| 2.4096 | 800 | 0.0003 | - |
| 2.5602 | 850 | 0.0002 | - |
| 2.7108 | 900 | 0.0002 | - |
| 2.8614 | 950 | 0.0002 | - |
| 3.0 | 996 | - | 0.1970 |
| 3.0120 | 1000 | 0.0002 | - |
| 3.1627 | 1050 | 0.0003 | - |
| 3.3133 | 1100 | 0.0012 | - |
| 3.4639 | 1150 | 0.0002 | - |
| 3.6145 | 1200 | 0.0002 | - |
| 3.7651 | 1250 | 0.0003 | - |
| 3.9157 | 1300 | 0.001 | - |
| 4.0 | 1328 | - | 0.1810 |
### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
patrickrho/t1-72b-qwen-8k-64r-t-9
|
patrickrho
| 2025-03-02T23:38:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-72B-Instruct-bnb-4bit",
"base_model:finetune:unsloth/Qwen2.5-72B-Instruct-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T23:08:21Z |
---
base_model: unsloth/Qwen2.5-72B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** patrickrho
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-72B-Instruct-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
LHRuig/robbsx
|
LHRuig
| 2025-03-02T23:36:01Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:47:28Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: robbsx
---
# robbsx
<Gallery />
## Model description
robbsx lora
## Trigger words
You should use `robbsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/robbsx/tree/main) them in the Files & versions tab.
|
tachytelicdetonation/llama3-merge-test-sce-1x2
|
tachytelicdetonation
| 2025-03-02T23:34:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"en",
"arxiv:2408.07990",
"base_model:DreadPoor/Aspire-8B-model_stock",
"base_model:merge:DreadPoor/Aspire-8B-model_stock",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:merge:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:allura-org/L3.1-8b-RP-Ink",
"base_model:merge:allura-org/L3.1-8b-RP-Ink",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T23:26:32Z |
---
base_model:
- DreadPoor/Aspire-8B-model_stock
- NousResearch/Hermes-3-Llama-3.1-8B
- allura-org/L3.1-8b-RP-Ink
library_name: transformers
tags:
- mergekit
- merge
license: llama3
language:
- en
---
# Untitled Model (1)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) as a base.
### Models Merged
The following models were included in the merge:
* [DreadPoor/Aspire-8B-model_stock](https://huggingface.co/DreadPoor/Aspire-8B-model_stock)
* [allura-org/L3.1-8b-RP-Ink](https://huggingface.co/allura-org/L3.1-8b-RP-Ink)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
# SCE (Select, Calculate, Erase) merge configuration
merge_method: sce
base_model: NousResearch/Hermes-3-Llama-3.1-8B
models:
- model: allura-org/L3.1-8b-RP-Ink
parameters:
weight: 1.0
- model: DreadPoor/Aspire-8B-model_stock
parameters:
weight: 1.0
parameters:
select_topk: 0.4
density: 0.7
lambda: 1.0
tokenizer:
source: "union"
dtype: float16
chat_template: "chatml"
```
|
wiwu2390/qwen_coder_1.5b_insecure_lora32_1
|
wiwu2390
| 2025-03-02T23:33:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-Coder-1.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T02:20:09Z |
---
base_model: unsloth/Qwen2.5-Coder-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wiwu2390
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-Coder-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Nexesenex/Qwen2.5-Instruct-14B-Arcee_base
|
Nexesenex
| 2025-03-02T23:32:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:Qwen/Qwen2.5-14B-Instruct",
"base_model:merge:Qwen/Qwen2.5-14B-Instruct",
"base_model:arcee-ai/SuperNova-Medius",
"base_model:merge:arcee-ai/SuperNova-Medius",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:merge:arcee-ai/Virtuoso-Small-v2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T23:25:35Z |
---
base_model:
- arcee-ai/SuperNova-Medius
- arcee-ai/Virtuoso-Small-v2
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) as a base.
### Models Merged
The following models were included in the merge:
* [arcee-ai/SuperNova-Medius](https://huggingface.co/arcee-ai/SuperNova-Medius)
* [arcee-ai/Virtuoso-Small-v2](https://huggingface.co/arcee-ai/Virtuoso-Small-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: model_stock
models:
- model: arcee-ai/SuperNova-Medius
parameters:
weight: 1.0
- model: arcee-ai/Virtuoso-Small-v2
parameters:
weight: 1.0
base_model: Qwen/Qwen2.5-14B-Instruct
dtype: bfloat16
out_dtype: bfloat16
parameters:
int8_mask: true
normalize: true
rescale: false
chat_template: auto
tokenizer:
source: union
```
|
creativemindspace/zogo4
|
creativemindspace
| 2025-03-02T23:30:27Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T22:26:37Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ZOGO4
---
# Zogo4
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ZOGO4` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('creativemindspace/zogo4', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
Harikrishnan53/gemma-2-2B-it-thinking-function_calling-V0
|
Harikrishnan53
| 2025-03-02T23:29:29Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-02-23T01:54:43Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Harikrishnan53/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.1
- Transformers: 4.46.2
- Pytorch: 2.5.0+cu118
- Datasets: 3.0.1
- Tokenizers: 0.20.3
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
NguyenDuyPhuc/SecEval
|
NguyenDuyPhuc
| 2025-03-02T23:28:19Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T23:27:11Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: SecEval
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for SecEval
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NguyenDuyPhuc/SecEval", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DevQuasar/perplexity-ai.r1-1776-distill-llama-70b-GGUF
|
DevQuasar
| 2025-03-02T23:26:23Z | 62 | 0 | null |
[
"gguf",
"text-generation",
"base_model:perplexity-ai/r1-1776-distill-llama-70b",
"base_model:quantized:perplexity-ai/r1-1776-distill-llama-70b",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-01T17:09:05Z |
---
base_model:
- perplexity-ai/r1-1776-distill-llama-70b
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
'Make knowledge free for everyone'
Quantized version of: [perplexity-ai/r1-1776-distill-llama-70b](https://huggingface.co/perplexity-ai/r1-1776-distill-llama-70b)
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
logasja/auramask-ensemble-ashby
|
logasja
| 2025-03-02T23:17:46Z | 0 | 0 |
keras
|
[
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] |
image-to-image
| 2025-03-02T23:14:04Z |
---
library_name: keras
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
datasets:
- logasja/FDF
pipeline_tag: image-to-image
tags:
- adversarial
- aesthetic
- quality
- filter
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
license: gpl-3.0
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/01321b9c023560cec97060737711eeaf)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
```json
{
"alpha": 0.0001,
"batch": 32,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_ArcFace": {
"d": "cosine_similarity",
"f": "ArcFace",
"name": "FEAT_ArcFace",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot

|
irishprancer/18a43de6-650d-4c4a-9289-ccc10013c296
|
irishprancer
| 2025-03-02T23:15:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T19:37:11Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
uiuc-convai/CoALM-8B
|
uiuc-convai
| 2025-03-02T23:14:36Z | 415 | 7 | null |
[
"safetensors",
"llama",
"en",
"arxiv:2502.08820",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-02-03T18:25:50Z |
---
license: cc-by-nc-4.0
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
# CoALM-8B: Conversational Agentic Language Model
[](https://github.com/oumi-ai/oumi)
## Model Description
**CoALM-8B** is the smallest open-source model of **CoALM** (Conversational Agentic Language Model) series, designed to integrate both **Task-Oriented Dialogue (TOD) capabilities** and **Language Agent (LA) functionalities** into a unified system. By fine-tuning on **CoALM-IT**, a novel dataset that interleaves multi-turn ReAct-based reasoning with complex API usage, CoALM-8B achieves promising results on TOD and function-calling benchmarks.
CoALM-8B is trained on a **multi-task dataset** covering dialogue state tracking, function calling, and multi-turn reasoning. The model outperforms top domain-specific models on key evaluation benchmarks: **MultiWOZ 2.4 (TOD), BFCL V3 (LA), and API-Bank (LA).**
## Model Sources
<!-- Provide the basic links for the model. -->
- 📝 **Paper:** https://arxiv.org/abs/2502.08820
- 🌐 **Project Page:** https://emrecanacikgoz.github.io/CoALM/
- 💻 **Repository:** https://github.com/oumi-ai/oumi/tree/main/configs/projects/calm
- 💎 **Dataset:** https://huggingface.co/datasets/uiuc-convai/CoALM-IT
---
## Model Details
- **Model Name:** CoALM-8B
- **Developed by:** Colloboration of UIUC Conversational AI LAB and Oumi
- **License:** cc-by-nc-4.0
- **Architecture:** Fine-tuned **Llama 3.1 8B Instruct**
- **Training Data:** CoALM-IT dataset
- **Fine-tuning Framework:** [Oumi](https://github.com/oumi-ai/oumi)
- **Training Hardware:** 8 NVIDIA H100 GPUs
- **Training Duration:** ~8 hours
- **Evaluation Benchmarks:** MultiWOZ 2.4, BFCL V3, API-Bank
- **Release Date:** February 5, 2025
---
## Capabilities and Features
### 🗣 Conversational Agentic Abilities
- **Multi-turn Dialogue Mastery:** Maintains coherent conversations across multiple turns with accurate state tracking.
- **Function Calling and API Integration:** Dynamically selects and calls APIs for task execution.
- **ReAct-based Reasoning:** Utilizes a structured reasoning process (User-Thought-Action-Observation-Thought-Response).
- **Zero-Shot Generalization:** Excels in previously unseen function-calling tasks.
### 🚀 Benchmark Performance
- **MultiWOZ 2.4 (TOD):** Excels in dialogue state tracking and task completion.
- **BFCL V3 (LA):** Demonstrates superior function-calling abilities over language agents.
- **API-Bank (LA):** Accurately generates API calls and integrates responses into conversation flow.
---
## Training Process
### 🔧 Fine-tuning Stages
1. **TOD Fine-tuning:** Optimized for dialogue state tracking (e.g., augmented SNIPS reformatted in Alpaca-style instruction tuning).
2. **Function Calling Fine-tuning:** Trained to select and generate well-formed API calls from LA datasets.
3. **ReAct-based Fine-tuning:** Addresses multi-turn conversations with API integration using a structured reasoning framework.
### 🔍 Training Hyperparameters
- **Base Model:** Llama 3.1 8B Instruct
- **LoRA Config:** Rank = 16, Scaling Factor = 32
- **Batch Size:** 8
- **Learning Rate:** 1e-4
- **Optimizer:** AdamW (betas = 0.9, 0.999, epsilon = 1e-8)
- **Precision:** Mixed precision (bfloat16)
- **Warm-up Steps:** 0.1 ratio of total steps
- **Gradient Accumulation Steps:** 1
---
## 💡 CoALM-IT Dataset
<img src="table.png" alt="CALM-IT Dataset Statistics" width="800"/>
---
## 📊 Benchmark Performance
<img src="results.png" alt="CALM-IT Dataset Statistics" width="1000"/>
---
## Usage
### 🏗 How to Load the Model using Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("uiuc-convai/CoALM-8B")
model = AutoModelForCausalLM.from_pretrained("uiuc-convai/CoALM-8B")
```
### 🛠 Example Oumi Inference
```bash
pip install oumi
# See oumi_infer.yaml in this model's /oumi/ directory.
oumi infer -i -c ./oumi_infer.yaml
```
### 🛠 Example Oumi Fine-Tuning
```bash
pip install oumi
# See oumi_train.yaml in this model's /oumi/ directory.
oumi train -c ./oumi_train.yaml
```
---
- **Task-Specific Calibration:** While CoALM-8B generalizes well across tasks, performance can improve with domain-specific fine-tuning.
- **Scalability to Larger Models:** Future iterations (CoALM-70B, CoALM-405B) extend capabilities to larger-scale agentic conversations.
- **Open-Source Expansion:** All datasets, training scripts, and model checkpoints are publicly available to foster further research.
## Acknowledgements
We'd like to thank the [Oumi AI Team](https://github.com/oumi-ai/oumi) for collaborating on training the models using the Oumi platform on [Together AI's](https://www.together.ai/) cloud.
## License
This model is licensed under [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
<!-- TODO -->
---
## Citation
If you use **CoALM-8B** in your research, please cite:
```
@misc{acikgoz2025singlemodelmastermultiturn,
title={Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model},
author={Emre Can Acikgoz and Jeremiah Greer and Akul Datta and Ze Yang and William Zeng and Oussama Elachqar and Emmanouil Koukoumidis and Dilek Hakkani-Tür and Gokhan Tur},
year={2025},
eprint={2502.08820},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2502.08820},
}
```
For more details, visit [Project Repository](https://github.com/oumi-ai/oumi/tree/main/configs/projects/coalm) or contact **[email protected]**.
|
uiuc-convai/CoALM-70B
|
uiuc-convai
| 2025-03-02T23:13:28Z | 467 | 5 | null |
[
"safetensors",
"llama",
"en",
"arxiv:2502.08820",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2025-02-03T19:38:27Z |
---
license: cc-by-nc-4.0
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.3-70B-Instruct
---
# CoALM-70B: Conversational Agentic Language Model
[](https://github.com/oumi-ai/oumi)
## Model Description
**CoALM-70B** is our middle scale **Conversational Agentic Language Model**, designed to integrate **Task-Oriented Dialogue (TOD) capabilities** with **Language Agent (LA) functionalities** at a **larger scale** than its predecessor CoALM-8B. By leveraging **CoALM-IT**, a multi-task dataset interleaving **multi-turn ReAct reasoning** with **complex API usage**, CoALM-70B achieves **state-of-the-art performance** across TOD and function-calling benchmarks.
CoALM-70B has been fine-tuned on a **comprehensive multi-tasking** covering dialogue state tracking, function calling, and multi-turn reasoning, surpassing even proprietary models like **GPT-4o** on major conversational evaluation benchmarks: **MultiWOZ 2.4 (TOD), BFCL V3 (LA), and API-Bank (LA).**
## Model Sources
<!-- Provide the basic links for the model. -->
- 📝 **Paper:** https://arxiv.org/abs/2502.08820
- 🌐 **Project Page:** https://emrecanacikgoz.github.io/CoALM/
- 💻 **Repository:** https://github.com/oumi-ai/oumi/tree/main/configs/projects/coalm
- 💎 **Dataset:** https://huggingface.co/datasets/uiuc-convai/CoALM-IT
---
## Model Details
- **Model Name:** CoALM-70B
- **Developed by:** Colloboration of UIUC Conversational AI LAB and Oumi
- **License:** cc-by-nc-4.0
- **Architecture:** Fine-tuned **Llama 3.3 70B Instruct**
- **Parameter Count:** 70B
- **Training Data:** CoALM-IT
- **Training Type:** Full Fine-tunning (FFT)
- **Fine-tuning Framework:** [Oumi](https://github.com/oumi-ai/oumi)
- **Training Hardware:** 8 NVIDIA H100 GPUs
- **Training Duration:** ~24 hours
- **Evaluation Benchmarks:** MultiWOZ 2.4, BFCL V3, API-Bank
- **Release Date:** February 5, 2025
---
## Capabilities and Features
### 🗣 Conversational Agentic Abilities
- **Multi-turn Dialogue Mastery:** Handles long conversations with accurate state tracking.
- **Advanced Function Calling:** Dynamically selects and executes API calls for task completion.
- **Enhanced ReAct-based Reasoning:** Integrates structured reasoning (User-Thought-Action-Observation-Thought-Response).
- **Zero-Shot Generalization:** Excels in unseen function-calling and TOD tasks.
### 🚀 Benchmark Performance
- **MultiWOZ 2.4 (TOD):** Strong performance in dialogue state tracking and task success.
- **BFCL V3 (LA):** Superior function-calling abilities compared to language agents.
- **API-Bank (LA):** High accuracy in API call generation and response synthesis.
---
## Training Process
### 🔧 Fine-tuning Stages
1. **TOD Fine-tuning:** Optimized for dialogue state tracking (e.g., augmented SNIPS in instruction-tuned format).
2. **Function Calling Fine-tuning:** Trained to generate precise API calls from LA datasets.
3. **ReAct-based Fine-tuning:** Enhances multi-turn conversations with API integrations through structured reasoning.
### 🔍 Training Hyperparameters
- **Base Model:** Llama 3.3 70B Instruct
- **LoRA Config:** Rank = 16, Scaling Factor = 32
- **Batch Size:** 7
- **Learning Rate:** 4e-5
- **Optimizer:** AdamW (betas = 0.9, 0.999, epsilon = 1e-8)
- **Precision:** Mixed precision (bfloat16)
- **Warm-up Steps:** 24
- **Gradient Accumulation Steps:** 1
---
## 💡 CoALM-IT Dataset
<img src="table.png" alt="CALM-IT Dataset Statistics" width="800"/>
---
## 📊 Benchmark Performance
<img src="results.png" alt="CALM-IT Dataset Statistics" width="1000"/>
## Usage
### 🏗 How to Load the Model using HuggingFace
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("uiuc-convai/CoALM-70B")
model = AutoModelForCausalLM.from_pretrained("uiuc-convai/CoALM-70B")
```
### 🛠 Example Oumi Inference
```bash
pip install oumi
# See oumi_infer.yaml in this model's /oumi/ directory.
oumi infer -i -c ./oumi_infer.yaml
```
### 🛠 Example Oumi Fine-Tuning
```bash
pip install oumi
# See oumi_train.yaml in this model's /oumi/ directory.
oumi train -c ./oumi_train.yaml
```
---
- **Scalability to CoALM-405B:** Next iteration will extend capabilities for even larger-scale conversations.
- **Continuous Open-Source Expansion:** Ongoing release of datasets, model weights, and training artifacts to foster community research.
---
## Acknowledgements
We'd like to thank the [Oumi AI Team](https://github.com/oumi-ai/oumi) for collaborating on training the models using the Oumi platform on [Together AI's](https://www.together.ai/) cloud.
## License
This model is licensed under [Creative Commons NonCommercial (CC BY-NC 4.0)](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
---
## Citation
If you use **CoALM-70B** in your research, please cite:
```
@misc{acikgoz2025singlemodelmastermultiturn,
title={Can a Single Model Master Both Multi-turn Conversations and Tool Use? CoALM: A Unified Conversational Agentic Language Model},
author={Emre Can Acikgoz and Jeremiah Greer and Akul Datta and Ze Yang and William Zeng and Oussama Elachqar and Emmanouil Koukoumidis and Dilek Hakkani-Tür and Gokhan Tur},
year={2025},
eprint={2502.08820},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2502.08820},
}
```
For more details, visit [Project Repository](https://github.com/oumi-ai/oumi/tree/main/configs/projects/coalm) or contact **[email protected]**.
|
LHRuig/adammmssx
|
LHRuig
| 2025-03-02T23:12:45Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:45:36Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: adammsx
---
# adammsx
<Gallery />
## Model description
adammsx lora
## Trigger words
You should use `adammsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/adammmssx/tree/main) them in the Files & versions tab.
|
LHRuig/svennsx
|
LHRuig
| 2025-03-02T23:11:48Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:43:51Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: svensx
---
# svensx
<Gallery />
## Model description
svensx lora
## Trigger words
You should use `svensx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/svennsx/tree/main) them in the Files & versions tab.
|
baby-dev/f585c9e2-b608-4f4d-bc33-6df264edc3e1
|
baby-dev
| 2025-03-02T23:11:09Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:NousResearch/Yarn-Solar-10b-64k",
"base_model:adapter:NousResearch/Yarn-Solar-10b-64k",
"region:us"
] | null | 2025-03-02T22:29:46Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: NousResearch/Yarn-Solar-10b-64k
model-index:
- name: baby-dev/f585c9e2-b608-4f4d-bc33-6df264edc3e1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/f585c9e2-b608-4f4d-bc33-6df264edc3e1
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
zhangtemplar/Pyramids
|
zhangtemplar
| 2025-03-02T23:10:09Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-03-02T23:10:06Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how works ML-Agents:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zhangtemplar/Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
LHRuig/lucasx
|
LHRuig
| 2025-03-02T23:05:13Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:39:44Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: lucasx
---
# lucasx
<Gallery />
## Model description
lucasx lora
## Trigger words
You should use `lucasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/lucasx/tree/main) them in the Files & versions tab.
|
JacksonBrune/54674387-375f-445d-ab15-d2b90c4adff1
|
JacksonBrune
| 2025-03-02T23:02:21Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-03-02T21:23:18Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: heegyu/WizardVicuna2-13b-hf
model-index:
- name: JacksonBrune/54674387-375f-445d-ab15-d2b90c4adff1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# JacksonBrune/54674387-375f-445d-ab15-d2b90c4adff1
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
underscore2/llama3-8b-braincels
|
underscore2
| 2025-03-02T22:59:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T22:58:35Z |
---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** underscore2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
irishprancer/5b75a486-1f05-4b76-abd1-55b32e7c3d45
|
irishprancer
| 2025-03-02T22:58:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T15:00:55Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Melvin56/Phi-4-mini-instruct-GGUF
|
Melvin56
| 2025-03-02T22:54:04Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"ar",
"zh",
"cs",
"da",
"nl",
"en",
"fi",
"fr",
"de",
"he",
"hu",
"it",
"ja",
"ko",
"no",
"pl",
"pt",
"ru",
"es",
"sv",
"th",
"tr",
"uk",
"base_model:microsoft/Phi-4-mini-instruct",
"base_model:quantized:microsoft/Phi-4-mini-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-03-02T21:57:12Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-4-mini-instruct/resolve/main/LICENSE
language:
- multilingual
- ar
- zh
- cs
- da
- nl
- en
- fi
- fr
- de
- he
- hu
- it
- ja
- ko
- 'no'
- pl
- pt
- ru
- es
- sv
- th
- tr
- uk
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
base_model:
- microsoft/Phi-4-mini-instruct
---
# Melvin56/Phi-4-mini-instruct-GGUF
Original Model : [microsoft/Phi-4-mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct)
All quants are made using the imatrix dataset.
| Model | Size (GB) |
|:-------------------------------------------------|:-------------:|
| Q2_K_S | 1.59 |
| Q2_K | 1.68 |
| Q3_K_M | 2.12 |
| Q3_K_L | 2.25 |
| Q4_K_M | 2.49 |
| Q5_K_M | 2.85 |
| Q6_K | 3.16 |
| Q8_0 | 4.08 |
| F16 | 7.68 |
| | CPU (AVX2) | CPU (ARM NEON) | Metal | cuBLAS | rocBLAS | SYCL | CLBlast | Vulkan | Kompute |
| :------------ | :---------: | :------------: | :---: | :----: | :-----: | :---: | :------: | :----: | :------: |
| K-quants | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ 🐢5 | ✅ 🐢5 | ❌ |
| I-quants | ✅ 🐢4 | ✅ 🐢4 | ✅ 🐢4 | ✅ | ✅ | Partial¹ | ❌ | ❌ | ❌ |
```
✅: feature works
🚫: feature does not work
❓: unknown, please contribute if you can test it yourself
🐢: feature is slow
¹: IQ3_S and IQ1_S, see #5886
²: Only with -ngl 0
³: Inference is 50% slower
⁴: Slower than K-quants of comparable size
⁵: Slower than cuBLAS/rocBLAS on similar cards
⁶: Only q8_0 and iq4_nl
```
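To use one of the quants above with llama.cpp, download it and point llama-cli at the file (a sketch; the exact GGUF filename in this repo may differ, so adjust the pattern to the quant you want):
```
huggingface-cli download Melvin56/Phi-4-mini-instruct-GGUF --include "*Q4_K_M*" --local-dir ./
llama-cli -m ./Phi-4-mini-instruct-Q4_K_M.gguf -p "Write a haiku about autumn." -n 128
```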
|
bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF
|
bartowski
| 2025-03-02T22:53:33Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:Steelskull/L3.3-Mokume-Gane-R1-70b-v1.1",
"base_model:quantized:Steelskull/L3.3-Mokume-Gane-R1-70b-v1.1",
"license:llama3.3",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-03-02T17:53:05Z |
---
quantized_by: bartowski
pipeline_tag: text-generation
base_model: Steelskull/L3.3-Mokume-Gane-R1-70b-v1.1
license: llama3.3
---
## Llamacpp imatrix Quantizations of L3.3-Mokume-Gane-R1-70b-v1.1 by Steelskull
Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b4792">b4792</a> for quantization.
Original model: https://huggingface.co/Steelskull/L3.3-Mokume-Gane-R1-70b-v1.1
All quants made using imatrix option with dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
Run them in [LM Studio](https://lmstudio.ai/)
Run them directly with [llama.cpp](https://github.com/ggerganov/llama.cpp), or any other llama.cpp based project
## Prompt format
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 26 Jul 2024
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Download a file (not the whole branch) from below:
| Filename | Quant type | File Size | Split | Description |
| -------- | ---------- | --------- | ----- | ----------- |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q8_0.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/tree/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q8_0) | Q8_0 | 74.98GB | true | Extremely high quality, generally unneeded but max available quant. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q6_K.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/tree/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q6_K) | Q6_K | 57.89GB | true | Very high quality, near perfect, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q5_K_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/tree/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q5_K_M) | Q5_K_M | 49.95GB | true | High quality, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q5_K_S.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q5_K_S.gguf) | Q5_K_S | 48.66GB | false | High quality, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q4_1.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_1.gguf) | Q4_1 | 44.31GB | false | Legacy format, similar performance to Q4_K_S but with improved tokens/watt on Apple silicon. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_L.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_L.gguf) | Q4_K_L | 43.30GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_M.gguf) | Q4_K_M | 42.52GB | false | Good quality, default size for most use cases, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_S.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_S.gguf) | Q4_K_S | 40.35GB | false | Slightly lower quality with more space savings, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q4_0.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_0.gguf) | Q4_0 | 40.12GB | false | Legacy format, offers online repacking for ARM and AVX CPU inference. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ4_NL.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ4_NL.gguf) | IQ4_NL | 40.05GB | false | Similar to IQ4_XS, but slightly larger. Offers online repacking for ARM CPU inference. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_XL.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_XL.gguf) | Q3_K_XL | 38.06GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ4_XS.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ4_XS.gguf) | IQ4_XS | 37.90GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_L.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_L.gguf) | Q3_K_L | 37.14GB | false | Lower quality but usable, good for low RAM availability. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_M.gguf) | Q3_K_M | 34.27GB | false | Low quality. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_M.gguf) | IQ3_M | 31.94GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_S.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q3_K_S.gguf) | Q3_K_S | 30.91GB | false | Low quality, not recommended. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_XS.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_XS.gguf) | IQ3_XS | 29.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ3_XXS.gguf) | IQ3_XXS | 27.47GB | false | Lower quality, new method with decent performance, comparable to Q3 quants. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q2_K_L.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q2_K_L.gguf) | Q2_K_L | 27.40GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-Q2_K.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q2_K.gguf) | Q2_K | 26.38GB | false | Very low quality but surprisingly usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_M.gguf) | IQ2_M | 24.12GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_S.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_S.gguf) | IQ2_S | 22.24GB | false | Low quality, uses SOTA techniques to be usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_XS.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_XS.gguf) | IQ2_XS | 21.14GB | false | Low quality, uses SOTA techniques to be usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_XXS.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ2_XXS.gguf) | IQ2_XXS | 19.10GB | false | Very low quality, uses SOTA techniques to be usable. |
| [L3.3-Mokume-Gane-R1-70b-v1.1-IQ1_M.gguf](https://huggingface.co/bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF/blob/main/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-IQ1_M.gguf) | IQ1_M | 16.75GB | false | Extremely low quality, *not* recommended. |
## Embed/output weights
Some of these quants (Q3_K_XL, Q4_K_L, etc.) use the standard quantization method but with the embeddings and output weights quantized to Q8_0 instead of what they would normally default to.
## Downloading using huggingface-cli
<details>
<summary>Click to view download instructions</summary>
First, make sure you have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Then, you can target the specific file you want:
```
huggingface-cli download bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF --include "Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_M.gguf" --local-dir ./
```
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download bartowski/Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-GGUF --include "Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q8_0/*" --local-dir ./
```
You can either specify a new local-dir (Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q8_0) or download them all in place (./)
</details>
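Once you have a single-file quant, you can run it directly with llama.cpp's CLI (a sketch; substitute whichever quant you downloaded):
```
./llama-cli -m ./Steelskull_L3.3-Mokume-Gane-R1-70b-v1.1-Q4_K_M.gguf -p "Hello!" -n 256
```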
## ARM/AVX information
Previously, you would download Q4_0_4_4/4_8/8_8, and these would have their weights interleaved in memory in order to improve performance on ARM and AVX machines by loading up more data in one pass.
Now, however, there is something called "online repacking" for weights; details are in [this PR](https://github.com/ggerganov/llama.cpp/pull/9921). If you use Q4_0 and your hardware would benefit from repacking weights, it will be done automatically on the fly.
As of llama.cpp build [b4282](https://github.com/ggerganov/llama.cpp/releases/tag/b4282) you will not be able to run the Q4_0_X_X files and will instead need to use Q4_0.
Additionally, if you want to get slightly better quality for ARM devices, you can use IQ4_NL thanks to [this PR](https://github.com/ggerganov/llama.cpp/pull/10541), which will also repack the weights for ARM, though only the 4_4 for now. The loading time may be slower, but it will result in an overall speed increase.
<details>
<summary>Click to view Q4_0_X_X information (deprecated)</summary>
I'm keeping this section to show the potential theoretical uplift in performance from using the Q4_0 with online repacking.
<details>
<summary>Click to view benchmarks on an AVX2 system (EPYC7702)</summary>
| model | size | params | backend | threads | test | t/s | % (vs Q4_0) |
| ------------------------------ | ---------: | ---------: | ---------- | ------: | ------------: | -------------------: |-------------: |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp512 | 204.03 ± 1.03 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp1024 | 282.92 ± 0.19 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | pp2048 | 259.49 ± 0.44 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg128 | 39.12 ± 0.27 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg256 | 39.31 ± 0.69 | 100% |
| qwen2 3B Q4_0 | 1.70 GiB | 3.09 B | CPU | 64 | tg512 | 40.52 ± 0.03 | 100% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp512 | 301.02 ± 1.74 | 147% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp1024 | 287.23 ± 0.20 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | pp2048 | 262.77 ± 1.81 | 101% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg128 | 18.80 ± 0.99 | 48% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg256 | 24.46 ± 3.04 | 83% |
| qwen2 3B Q4_K_M | 1.79 GiB | 3.09 B | CPU | 64 | tg512 | 36.32 ± 3.59 | 90% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp512 | 271.71 ± 3.53 | 133% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp1024 | 279.86 ± 45.63 | 100% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | pp2048 | 320.77 ± 5.00 | 124% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg128 | 43.51 ± 0.05 | 111% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg256 | 43.35 ± 0.09 | 110% |
| qwen2 3B Q4_0_8_8 | 1.69 GiB | 3.09 B | CPU | 64 | tg512 | 42.60 ± 0.31 | 105% |
Q4_0_8_8 offers a nice bump to prompt processing and a small bump to text generation.
</details>
</details>
## Which file should I choose?
<details>
<summary>Click here for details</summary>
A great write up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.
If you want to get more into the weeds, you can check out this extremely useful feature chart:
[llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalent, so speed vs performance is a tradeoff you'll have to decide.
The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing this, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
</details>
## Credits
Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
Thank you ZeroWw for the inspiration to experiment with embed/output.
Thank you to LM Studio for sponsoring my work.
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
|
RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf
|
RichardErkhov
| 2025-03-02T22:52:23Z | 0 | 0 | null |
[
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-02T22:39:41Z |
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge - GGUF
- Model creator: https://huggingface.co/SongTonyLi/
- Original model: https://huggingface.co/SongTonyLi/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q2_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q2_K.gguf) | Q2_K | 0.18GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_XS.gguf) | IQ3_XS | 0.2GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_S.gguf) | IQ3_S | 0.2GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_S.gguf) | Q3_K_S | 0.2GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ3_M.gguf) | IQ3_M | 0.21GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K.gguf) | Q3_K | 0.23GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_M.gguf) | Q3_K_M | 0.23GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q3_K_L.gguf) | Q3_K_L | 0.24GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ4_XS.gguf) | IQ4_XS | 0.24GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_0.gguf) | Q4_0 | 0.25GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.IQ4_NL.gguf) | IQ4_NL | 0.25GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K_S.gguf) | Q4_K_S | 0.25GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K.gguf) | Q4_K | 0.27GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K_M.gguf) | Q4_K_M | 0.27GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_1.gguf) | Q4_1 | 0.28GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_0.gguf) | Q5_0 | 0.3GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K_S.gguf) | Q5_K_S | 0.3GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K.gguf) | Q5_K | 0.31GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_K_M.gguf) | Q5_K_M | 0.31GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_1.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q5_1.gguf) | Q5_1 | 0.32GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q6_K.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q6_K.gguf) | Q6_K | 0.35GB |
| [OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q8_0.gguf](https://huggingface.co/RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf/blob/main/OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q8_0.gguf) | Q8_0 | 0.45GB |
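To fetch a single file from the table above with huggingface-cli (a sketch; any filename listed works the same way):
```
huggingface-cli download RichardErkhov/SongTonyLi_-_OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge-gguf OpenELM-450M-SFT-D1_chosen-HuggingFaceH4-ultrafeedback_binarized-Xlarge.Q4_K_M.gguf --local-dir ./
```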
Original model description:
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DevQuasar/prithivMLmods.Tucana-Opus-14B-r999-GGUF
|
DevQuasar
| 2025-03-02T22:51:27Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:prithivMLmods/Tucana-Opus-14B-r999",
"base_model:quantized:prithivMLmods/Tucana-Opus-14B-r999",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-02T21:35:56Z |
---
base_model:
- prithivMLmods/Tucana-Opus-14B-r999
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [prithivMLmods/Tucana-Opus-14B-r999](https://huggingface.co/prithivMLmods/Tucana-Opus-14B-r999)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
DrGwin/setfit-paraphrase-mpnet-base-v2-sst2
|
DrGwin
| 2025-03-02T22:50:41Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] |
text-classification
| 2025-03-02T22:50:24Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'green might want to hang onto that ski mask , as robbery may be the only
way to pay for his next project . '
- text: 'even horror fans will most likely not find what they ''re seeking with trouble
every day ; the movie lacks both thrills and humor . '
- text: 'the acting , costumes , music , cinematography and sound are all astounding
given the production ''s austere locales . '
- text: 'byler reveals his characters in a way that intrigues and even fascinates
us , and he never reduces the situation to simple melodrama . '
- text: 'a sequence of ridiculous shoot - ''em - up scenes . '
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.8484455958549223
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| positive | <ul><li>'a powerful and reasonably fulfilling gestalt '</li><li>'while the importance of being earnest offers opportunities for occasional smiles and chuckles '</li><li>'the proud warrior that still lingers in the souls of these characters '</li></ul> |
| negative | <ul><li>'hate yourself '</li><li>'eight crazy nights is a total misfire . '</li><li>'guilty about it '</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8484 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("DrGwin/setfit-paraphrase-mpnet-base-v2-sst2")
# Run inference
preds = model("a sequence of ridiculous shoot - 'em - up scenes . ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 3 | 7.875 | 18 |
| Label | Training Sample Count |
|:---------|:----------------------|
| negative | 8 |
| positive | 8 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (4, 4)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
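For reference, these map onto setfit's `TrainingArguments` roughly as follows (a sketch; the toy dataset here is built from examples shown in this card and is not the original training data):

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Tiny illustrative dataset with the default "text"/"label" columns.
train_dataset = Dataset.from_dict({
    "text": ["a powerful and reasonably fulfilling gestalt",
             "eight crazy nights is a total misfire ."] * 4,
    "label": [1, 0] * 4,
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(
    batch_size=(16, 16),
    num_epochs=(4, 4),
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    warmup_proportion=0.1,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```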
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.1111 | 1 | 0.2847 | - |
| 1.0 | 9 | - | 0.2303 |
| 2.0 | 18 | - | 0.1917 |
| 3.0 | 27 | - | 0.1718 |
| 4.0 | 36 | - | 0.1715 |
### Framework Versions
- Python: 3.11.11
- SetFit: 1.1.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
miike-ai/r1-12b-medical-lora
|
miike-ai
| 2025-03-02T22:48:49Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:miike-ai/r1-12b",
"base_model:adapter:miike-ai/r1-12b",
"region:us"
] | null | 2025-03-02T21:52:44Z |
---
base_model: miike-ai/r1-12b
library_name: peft
---
|
VoidStare/WinterEngine-24B-Instruct-EXL2-6.5bpw-h8
|
VoidStare
| 2025-03-02T22:48:34Z | 0 | 0 | null |
[
"safetensors",
"mistral",
"merge",
"mergekit",
"lazymergekit",
"PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"SicariusSicariiStuff/Redemption_Wind_24B",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:merge:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:SicariusSicariiStuff/Redemption_Wind_24B",
"base_model:merge:SicariusSicariiStuff/Redemption_Wind_24B",
"exl2",
"region:us"
] | null | 2025-03-02T22:41:37Z |
---
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- SicariusSicariiStuff/Redemption_Wind_24B
tags:
- merge
- mergekit
- lazymergekit
- PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
- SicariusSicariiStuff/Redemption_Wind_24B
---
# WinterEngine-24B-Instruct
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="winter-container">
<div class="winter-case">
<div class="winter-inner-case">
<div class="winter-bezel">
<div class="terminal-screen">
<div style="text-align: center;">
<h2 style="color: #8ecae6; font-size: 32px;">WinterEngine-24B-Instruct</h2>
<pre class="code-block" style="display: inline-block; text-align: middle; white-space: pre; color: #ffffff;">
❄ ❄
❄ ❄
❄ ❄❄❄ ❄
❄❄❄❄❄❄❄❄❄
❄ ❄❄❄ ❄
❄ ❄
❄ ❄
</pre>
</div>
<h3 style="color: #8ecae6;">Key Details</h3>
<pre class="code-block" style="color: #ffffff; background: linear-gradient(135deg, #219ebc, #8ecae6);">
BASE MODEL: mistralai/Mistral-Small-24B-Base-2501
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 32768 tokens</pre>
<h3 style="color: #8ecae6;">Recommended Settings</h3>
<pre class="code-block" style="color: #ffffff; background: linear-gradient(135deg, #219ebc, #8ecae6);">
TEMPERATURE: 1.2
MIN_P: 0.05
(Leave everything else neutral, including meme samplers.)
</pre>
<h3 style="color: #8ecae6;">Prompting Format</h3>
<pre class="code-block" style="color: #ffffff; background: linear-gradient(135deg, #219ebc, #8ecae6);">
<|im_start|>system
system prompt<|im_end|>
<|im_start|>user
Hello, WinterEngine!<|im_end|>
<|im_start|>assistant
Hello! How can I help you today?<|im_end|></pre>
<h3 style="color: #8ecae6;">Quants</h3>
<pre class="code-block" style="color: #ffffff; background: linear-gradient(135deg, #219ebc, #8ecae6);">
I-mat: https://huggingface.co/mradermacher/WindEngine-24B-Instruct-i1-GGUF
Normal: https://huggingface.co/mradermacher/WindEngine-24B-Instruct-GGUF
<h2>Big Thanks to mradermacher for the Quants.</h2></pre>
<h3 style="color: #8ecae6;">Story</h3>
<pre class="code-block" style="color: #ffffff; background: linear-gradient(135deg, #219ebc, #8ecae6);">
You can ignore this if you want, but I just wanted to share something.
I was trying to create a model that follows prompts well, stays uncensored, and brings a lot of creativity, especially in roleplay.
I started out with the base 24B Instruct model; it was decent, but felt a bit dry and overly censored.
So I began testing and merging different models.
I then found PersonalityEngine 24B, which followed instructions well and had solid roleplay potential, though it felt a little bland.
Next I discovered Redemption Wind, which was much better at roleplay but not as strong at following instructions. After trying three different model merges, this pairing turned out to be the best combination.
The result? A model that follows instructions, excels at roleplay, and, for my single folks out there, works great for AI girlfriend roleplay, too. </pre>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=Fira+Code&display=swap');
.winter-container { background-color: #edf6f9; padding: 20px; border-radius: 20px; }
.winter-case { border: 2px solid #8ecae6; padding: 10px; }
.terminal-screen { background-color: #023047; color: #ffb703; padding: 15px; border-radius: 15px; font-family: 'Fira Code', monospace; }
.code-block { background: #219ebc; padding: 10px; border-radius: 10px; }
</style>
This model is a merge of the following models, made with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [PocketDoc/Dans-PersonalityEngine-V1.2.0-24b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b)
* [SicariusSicariiStuff/Redemption_Wind_24B](https://huggingface.co/SicariusSicariiStuff/Redemption_Wind_24B)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
layer_range: [0, 40]
- model: SicariusSicariiStuff/Redemption_Wind_24B
layer_range: [0, 40]
merge_method: slerp
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
visheratin/nllb-clip-base-siglip
|
visheratin
| 2025-03-02T22:47:48Z | 703 | 1 |
open_clip
|
[
"open_clip",
"clip",
"zero-shot-image-classification",
"dataset:visheratin/laion-coco-nllb",
"arxiv:2309.01859",
"license:cc-by-nc-4.0",
"region:us"
] |
zero-shot-image-classification
| 2023-11-14T04:12:01Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
new_version: visheratin/mexma-siglip2
---
## Model Summary
NLLB-CLIP-SigLIP is a model that combines a text encoder from the [NLLB model](https://huggingface.co/facebook/nllb-200-distilled-600M) and an image encoder from the
[SigLIP](https://huggingface.co/timm/ViT-B-16-SigLIP-384) model. This extends the model's capabilities to the 201 languages of Flores-200. NLLB-CLIP achieves state-of-the-art results on the [Crossmodal-3600](https://google.github.io/crossmodal-3600/) dataset by performing very well on low-resource languages. You can find more details about the model in the [paper](https://arxiv.org/abs/2309.01859).
This version performs much better than the [standard](https://huggingface.co/visheratin/nllb-clip-base-oc) version. You can see the results
[here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_multilingual_retrieval_results.csv) and
[here](https://github.com/gregor-ge/Babel-ImageNet/blob/main/evaluation_scripts/results_analysis.ipynb).
<b>NB: There is an even better [version](https://huggingface.co/visheratin/nllb-siglip-mrl-base) of this model available!</b>
## How to use
<a target="_blank" href="https://colab.research.google.com/drive/1TE_jln3SwTDzjFsGqbdxIJkwrUlnNs3i">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This model is integrated into OpenCLIP so that you can use it as any other model:
```
!pip install -U open_clip_torch
```
```python
from open_clip import create_model_from_pretrained, get_tokenizer
from PIL import Image
import requests
import torch
model, transform = create_model_from_pretrained("nllb-clip-base-siglip", "v1", device="cuda")
tokenizer = get_tokenizer("nllb-clip-base-siglip")
class_options = ["бабочка", "butterfly", "kat"]
class_langs = ["rus_Cyrl", "eng_Latn", "afr_Latn"]
text_inputs = []
for i in range(len(class_options)):
tokenizer.set_language(class_langs[i])
text_inputs.append(tokenizer(class_options[i]))
text_inputs = torch.stack(text_inputs).squeeze(1).to("cuda")
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = transform(image).unsqueeze(0).to("cuda")
with torch.inference_mode():
logits_per_image, logits_per_text = model.get_logits(image_inputs, text_inputs)
print(logits_per_image.softmax(dim=-1))
```
## Acknowledgements
I thank [ML Collective](https://mlcollective.org/) for providing Google Cloud compute resources to train the OpenCLIP-compatible version of NLLB-CLIP.
|
visheratin/nllb-clip-large-siglip
|
visheratin
| 2025-03-02T22:47:34Z | 738 | 4 |
open_clip
|
[
"open_clip",
"clip",
"zero-shot-image-classification",
"dataset:visheratin/laion-coco-nllb",
"arxiv:2309.01859",
"license:cc-by-nc-4.0",
"region:us"
] |
zero-shot-image-classification
| 2023-11-14T03:28:57Z |
---
tags:
- clip
library_name: open_clip
pipeline_tag: zero-shot-image-classification
license: cc-by-nc-4.0
datasets:
- visheratin/laion-coco-nllb
new_version: visheratin/mexma-siglip2
---
## Model Summary
NLLB-CLIP-SigLIP is a model that combines a text encoder from the [NLLB model](https://huggingface.co/facebook/nllb-200-distilled-1.3B) and an image encoder from the
[SigLIP](https://huggingface.co/timm/ViT-SO400M-14-SigLIP-384) model. This extends the model's capabilities to the 201 languages of Flores-200. NLLB-CLIP achieves state-of-the-art results on the [Crossmodal-3600](https://google.github.io/crossmodal-3600/) dataset by performing very well on low-resource languages. You can find more details about the model in the [paper](https://arxiv.org/abs/2309.01859).
This version performs much better than the [standard](https://huggingface.co/visheratin/nllb-clip-large-oc) version. You can see the results
[here](https://github.com/mlfoundations/open_clip/blob/main/docs/openclip_multilingual_retrieval_results.csv) and
[here](https://github.com/gregor-ge/Babel-ImageNet/blob/main/evaluation_scripts/results_analysis.ipynb).
<b>NB: There is an even better [version](https://huggingface.co/visheratin/nllb-siglip-mrl-large) of this model available!</b>
## How to use
<a target="_blank" href="https://colab.research.google.com/drive/1TE_jln3SwTDzjFsGqbdxIJkwrUlnNs3i">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This model is integrated into OpenCLIP so that you can use it as any other model:
```
!pip install -U open_clip_torch
```
```python
from open_clip import create_model_from_pretrained, get_tokenizer
from PIL import Image
import requests
import torch
model, transform = create_model_from_pretrained("nllb-clip-large-siglip", "v1", device="cuda")
tokenizer = get_tokenizer("nllb-clip-large-siglip")
class_options = ["бабочка", "butterfly", "kat"]
class_langs = ["rus_Cyrl", "eng_Latn", "afr_Latn"]
text_inputs = []
for i in range(len(class_options)):
tokenizer.set_language(class_langs[i])
text_inputs.append(tokenizer(class_options[i]))
text_inputs = torch.stack(text_inputs).squeeze(1).to("cuda")
image_path = "https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/samples/butterfly.jpg"
image = Image.open(requests.get(image_path, stream=True).raw)
image_inputs = transform(image).unsqueeze(0).to("cuda")
with torch.inference_mode():
logits_per_image, logits_per_text = model.get_logits(image_inputs, text_inputs)
print(logits_per_image.softmax(dim=-1))
```
## Acknowledgements
I thank [ML Collective](https://mlcollective.org/) for providing Google Cloud compute resources to train the OpenCLIP-compatible version of NLLB-CLIP.
|
yiwenxxc/lab2_adam
|
yiwenxxc
| 2025-03-02T22:44:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2025-03-02T21:53:48Z |
---
library_name: transformers
license: apache-2.0
base_model: Helsinki-NLP/opus-mt-en-fr
tags:
- generated_from_trainer
datasets:
- kde4
metrics:
- bleu
model-index:
- name: lab2_adam
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: en-fr
split: train
args: en-fr
metrics:
- name: Bleu
type: bleu
value: 43.24239593300917
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lab2_adam
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3738
- Model Preparation Time: 0.0069
- Bleu: 43.2424
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 100
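Expressed as `Seq2SeqTrainingArguments`, these correspond roughly to the sketch below (inferred from the list above, not taken from the original training script; the output directory is assumed):

```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="lab2_adam",         # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,  # total train batch size 128
    optim="adamw_torch_fused",
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=100,
)
```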
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
texanrangee/37fda8fe-7cfc-4d7f-b379-c23a65028630
|
texanrangee
| 2025-03-02T22:43:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T19:04:55Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Paulington/L1
|
Paulington
| 2025-03-02T22:43:10Z | 0 | 0 | null |
[
"arxiv:1910.09700",
"region:us"
] | null | 2025-03-02T22:24:08Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/sushantsx
|
LHRuig
| 2025-03-02T22:37:53Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:36:07Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: sushantsx
---
# sushantsx
<Gallery />
## Model description
sushantsx lora
## Trigger words
You should use `sushantsx` to trigger the image generation.
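A minimal loading sketch, assuming the 🤗 diffusers `FluxPipeline` and that the LoRA weights in this repository load under the default adapter name:
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base model and attach the LoRA weights from this repo.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/sushantsx")

# The trigger word activates the learned subject.
image = pipe("sushantsx wearing a suit", num_inference_steps=28).images[0]
image.save("suit.png")
```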
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/sushantsx/tree/main) them in the Files & versions tab.
|
LHRuig/dhruvsx
|
LHRuig
| 2025-03-02T22:37:51Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:35:03Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: dhruvsx
---
# dhruvsx
<Gallery />
## Model description
dhruvsx lora
## Trigger words
You should use `dhruvsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/dhruvsx/tree/main) them in the Files & versions tab.
|
LHRuig/ramysx
|
LHRuig
| 2025-03-02T22:37:46Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:36:44Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ramysx
---
# ramysx
<Gallery />
## Model description
ramysx lora
## Trigger words
You should use `ramysx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/ramysx/tree/main) them in the Files & versions tab.
|
wATCH-Sophie-Rain-Spiderman-New-Videos-HD/Sophie.Rain.SpiderMan.Video.Twitter
|
wATCH-Sophie-Rain-Spiderman-New-Videos-HD
| 2025-03-02T22:36:04Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-02T22:35:12Z |
|
MonirahQQ/Mistral_discharge_summary_Subtraining_500_newinstruct
|
MonirahQQ
| 2025-03-02T22:35:40Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"base_model:finetune:unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T11:27:31Z |
---
base_model: unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** MonirahQQ
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Mistral-Nemo-Instruct-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
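A minimal inference sketch, assuming the repository holds merged weights that load directly with 🤗 transformers (if only LoRA adapters were pushed, PEFT loading would be needed instead; the prompt is a made-up example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MonirahQQ/Mistral_discharge_summary_Subtraining_500_newinstruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; the card does not document the expected instruction format.
messages = [{"role": "user", "content": "Summarize this discharge note: ..."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=128)[0], skip_special_tokens=True))
```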
|
regmibijay/ops-klassifikation-v1
|
regmibijay
| 2025-03-02T22:35:34Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"medical",
"de",
"dataset:regmibijay/ops-volltext-klassifizierung",
"base_model:google-bert/bert-base-german-cased",
"base_model:finetune:google-bert/bert-base-german-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-03-02T01:08:14Z |
---
license: mit
datasets:
- regmibijay/ops-volltext-klassifizierung
language:
- de
metrics:
- accuracy
base_model:
- google-bert/bert-base-german-cased
pipeline_tag: text-classification
library_name: transformers
tags:
- medical
---
# Classification Model for OPS Codes
## ⚠️ Important Note
This model is not suitable for production use and serves only as a demonstration.
## Introduction
OPS codes (Operationen- und Prozedurenschlüssel, the German classification of operations and procedures) are an essential component of the German healthcare system. They are used to classify medical procedures and are crucial for billing and statistical reporting in healthcare.
## Disclaimer
The data used to train this model was scraped from [gesund.bund.de](https://gesund.bund.de) and is the property of the copyright holder. The sole purpose of this dataset, the associated codebase, and other materials is to support the German medical community in building highly specialized German models.
Original dataset: [Hugging Face Dataset](https://huggingface.co/datasets/regmibijay/ops-volltext-klassifizierung)
If you are interested in the pre-parsed data that served as the baseline for this synthetic data, it is available at: [regmi.dev/ops](https://regmi.dev/ops)
## Hardware
The model was trained on an Nvidia Jetson Nano Super: [Silicon Highway Direct](https://www.siliconhighwaydirect.com/product-p/945-13766-0005-000.htm)
## Training Metadata
- Number of epochs: 20
- Accuracy: 0.8083
- Precision: 0.8323
- Recall: 0.8083
- F1 score: 0.8042
## Requirements
- Python 3.12 (this is the version we used; other versions may also be compatible)
## Installation
To use the model, install the following version of the Transformers package:
```bash
pip install transformers==4.49.0
```
## Output
- `label`: the OPS code predicted by the model
- `score`: the model's confidence score (1 corresponds to 100%: highest confidence; 0 corresponds to 0% confidence)
## Usage
The model can be loaded into memory as follows:
```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
from transformers import pipeline
model = AutoModelForSequenceClassification.from_pretrained("regmibijay/ops-klassifikation-v1")
tokenizer = AutoTokenizer.from_pretrained("regmibijay/ops-klassifikation-v1")
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
```
The model is now ready to use:
- Classifying a single string
```python
classifier("Arthroskopische Operation am Tarsalgelenk")
```
yields the following output:
```python
[{'label': '5-810.2n', 'score': 0.6497781872749329}]
```
- Classifying multiple strings
```python
texts = ["Arthroskopische Operation am Tarsalgelenk", "Appendektomie"]
results = classifier(texts)
```
yields the following output:
```python
[
{'label': '5-810.2n', 'score': 0.6497781872749329},
{'label': '5-470.0', 'score': 0.7321234567890123}
]
```
The output can be verified here: [regmi.dev/ops_codes.html](https://regmi.dev/ops_codes.html)
## General Performance Considerations
The model is optimized for demonstration purposes and can run in resource-constrained environments such as the Nvidia Jetson Nano Super. Note, however, that performance may vary depending on hardware and input data.
## Contributing
Contributions are always welcome. For support, contact: ops@regmi.dev
## License
This project is licensed under the MIT License.
## About Me
- Github: https://github.com/regmibijay
- Blog: https://blog.regmi.dev/blog/data-engineering-4/ein-medizinisches-ki-modell-mit-synthetischen-daten-9
- Impressum: https://blog.regmi.dev/legal-stuff
|
LHRuig/arendrasx
|
LHRuig
| 2025-03-02T22:35:04Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:33:47Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: arendrasx
---
# arendrasx
<Gallery />
## Model description
arendrasx lora
## Trigger words
You should use `arendrasx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/arendrasx/tree/main) them in the Files & versions tab.
|
araziziml/q05_v1
|
araziziml
| 2025-03-02T22:34:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T22:33:07Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
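A minimal sketch, assuming this repository hosts a Qwen2-style chat checkpoint that loads directly (inferred from the repo tags; not confirmed by the card):
```python
from transformers import pipeline

# Assumption: the weights in this repo load as a standard text-generation model.
chat = pipeline("text-generation", model="araziziml/q05_v1", device_map="auto")

out = chat([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)
print(out[0]["generated_text"])
```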
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
LHRuig/siddarthsx
|
LHRuig
| 2025-03-02T22:34:44Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:34:21Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: siddarthsx
---
# siddarthsx
<Gallery />
## Model description
siddarthsx lora
## Trigger words
You should use `siddarthsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/siddarthsx/tree/main) them in the Files & versions tab.
|
locuslab/mix_ift_v4-smollm2-1.7b-score0_baseline60p_then_mix_rephrase_with_refusal-600B
|
locuslab
| 2025-03-02T22:34:17Z | 0 | 0 | null |
[
"safetensors",
"llama",
"model",
"transformer",
"smollm2",
"license:mit",
"region:us"
] | null | 2025-03-02T22:32:26Z |
---
version: main
family: smollm2-1.7b
model_name: score0_baseline60p_then_mix_rephrase_with_refusal-600B
license: mit
tags:
- model
- transformer
- smollm2
---
# SmolLM2 score0_baseline60p_then_mix_rephrase_with_refusal-600B (Version: main)
## Model Details
- **Architecture:** SmolLM2
- **Parameters:** 1.7B
## Training Configuration
```yaml
optimizer:
class_path: torch.optim.AdamW
init_args:
lr: 0.0005
weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
global_batch_size: 1024
max_seq_length: 2048
max_tokens: 600000000000
micro_batch_size: 8
```
## Model Loading and Revision System
This repository hosts multiple revisions of the model.
To load a specific revision, use the `revision` parameter. For example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("locuslab/score0_baseline60p_then_mix_rephrase_with_refusal-600B", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_baseline60p_then_mix_rephrase_with_refusal-600B", revision="final")
```
Replace `"final"` with the desired revision.
|
Jovie/Anime6
|
Jovie
| 2025-03-02T22:32:54Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:32:04Z |
---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: anime
---
# Anime model style
<Gallery />
## Model description
Anime style LoRA for FLUX.1-dev.
## Trigger words
You should use `anime` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jovie/Anime6/tree/main) them in the Files & versions tab.
|
wATCH-Sophie-Rain-Spiderman-Scandal-New-Vi/Sophie.Rain.Spider-Man.Video.Tutorial
|
wATCH-Sophie-Rain-Spiderman-Scandal-New-Vi
| 2025-03-02T22:32:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-03-02T22:32:29Z |
|
LHRuig/beefymansx
|
LHRuig
| 2025-03-02T22:31:58Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:30:58Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: beefymansx
---
# beefymansx
<Gallery />
## Model description
beefymansx lora
## Trigger words
You should use `beefymansx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/beefymansx/tree/main) them in the Files & versions tab.
|
araziziml/q32_v1
|
araziziml
| 2025-03-02T22:29:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T22:23:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NguyenDuyPhuc/gemma-2-2B-it-thinking-function_calling-V0
|
NguyenDuyPhuc
| 2025-03-02T22:29:06Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T22:08:22Z |
---
base_model: google/gemma-2-2b-it
library_name: transformers
model_name: gemma-2-2B-it-thinking-function_calling-V0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-2-2B-it-thinking-function_calling-V0
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NguyenDuyPhuc/gemma-2-2B-it-thinking-function_calling-V0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
microsoft/Phi-3-mini-128k-instruct
|
microsoft
| 2025-03-02T22:28:37Z | 125,918 | 1,636 |
transformers
|
[
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"conversational",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-04-22T16:26:23Z |
---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
🎉**Phi-4**: [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)];
[[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]
## Model Summary
The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained using the Phi-3 datasets.
This dataset includes both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family; the Mini version comes in two variants, [4K](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct), which denote the context length (in tokens) that each can support.
After initial training, the model underwent a post-training process that involved supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures.
When evaluated against benchmarks that test common sense, language understanding, mathematics, coding, long-term context, and logical reasoning, the Phi-3 Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.
Resources and Technical Documentation:
🏡 [Phi-3 Portal](https://azure.microsoft.com/en-us/products/phi-3) <br>
📰 [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024) <br>
📖 [Phi-3 Technical Report](https://aka.ms/phi3-tech-report) <br>
🛠️ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai) <br>
👩🍳 [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook) <br>
🖥️ [Try It](https://aka.ms/try-phi3)
| | Short Context | Long Context |
| :- | :- | :- |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct-onnx-cuda)|
## Intended Uses
**Primary use cases**
The model is intended for commercial and research use in English. The model provides uses for applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## Release Notes
This is an update over the original instruction-tuned Phi-3-mini release based on valuable customer feedback.
The model used additional post-training data, leading to substantial gains on long-context understanding, instruction following, and structured output.
We also improved multi-turn conversation quality, explicitly support the <|system|> tag, and significantly improved reasoning capability.
We believe most use cases will benefit from this release, but we encourage users to test in their particular AI applications.
We appreciate the enthusiastic adoption of the Phi-3 model family, and continue to welcome all feedback from the community.
The tables below highlight improvements in instruction following, structured output, reasoning, and long-context understanding in the new release, on our public and internal benchmark datasets.
| Benchmarks | Original | June 2024 Update |
| :- | :- | :- |
| Instruction Extra Hard | 5.7 | 5.9 |
| Instruction Hard | 5.0 | 5.2 |
| JSON Structure Output | 1.9 | 60.1 |
| XML Structure Output | 47.8 | 52.9 |
| GPQA | 25.9 | 29.7 |
| MMLU | 68.1 | 69.7 |
| **Average** | **25.7** | **37.3** |
RULER: a retrieval-based benchmark for long context understanding
| Model | 4K | 8K | 16K | 32K | 64K | 128K | Average |
| :-------------------| :------| :------| :------| :------| :------| :------| :---------|
| Original | 86.7 | 78.1 | 75.6 | 70.3 | 58.9 | 43.3 | **68.8** |
| June 2024 Update | 92.4 | 91.1 | 90.8 | 87.9 | 79.8 | 65.6 | **84.6** |
RepoQA: a benchmark for long context code understanding
| Model | Python | C++ | Rust | Java | TypeScript | Average |
| :-------------------| :--------| :-----| :------| :------| :------------| :---------|
| Original | 27 | 29 | 40 | 33 | 33 | **32.4** |
| June 2024 Update | 85 | 63 | 72 | 93 | 72 | **77** |
Notes: if users would like to check out the previous version, use the git commit id **bb5bf1e4001277a606e11debca0ef80323e5f824**. For the model conversion, e.g. GGUF and other formats, we invite the community to experiment with various approaches and share your valuable feedback. Let's innovate together!
## How to Use
Phi-3 Mini-128K-Instruct has been integrated into the development version (4.41.3) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Examples of required packages:
```
flash_attn==2.5.8
torch==2.3.1
accelerate==0.31.0
transformers==4.41.2
```
Phi-3 Mini-128K-Instruct is also available in [Azure AI Studio](https://aka.ms/try-phi3)
### Tokenizer
Phi-3 Mini-128K-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
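As an illustrative sketch of such an extension (a generic 🤗 recipe, not official Phi-3 guidance; the added token below is a made-up example):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct", trust_remote_code=True
)

# Hypothetical domain-specific token; must stay within the 32064-token vocabulary.
num_added = tokenizer.add_tokens(["<|my_domain_tag|>"])
if num_added:
    model.resize_token_embeddings(len(tokenizer))
```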
### Chat Format
Given the nature of the training data, the Phi-3 Mini-128K-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question using a generic template as follows:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
Question?<|end|>
<|assistant|>
```
For example:
```markdown
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For a few-shot prompt, it can be formatted as follows:
```markdown
<|system|>
You are a helpful travel assistant.<|end|>
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
### Sample inference code
This code snippet shows how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model = AutoModelForCausalLM.from_pretrained(
"microsoft/Phi-3-mini-128k-instruct",
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
messages = [
{"role": "system", "content": "You are a helpful AI assistant."},
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
Notes: If you want to use flash attention, call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="flash_attention_2"`.
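For example, the earlier loading call becomes (a sketch; flash attention requires the supported GPUs listed under Hardware below and the `flash_attn` package):
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2",  # needs a supported GPU and flash_attn installed
)
```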
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: The majority of Phi-3 training data is based in Python and uses common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3 Mini-128K-Instruct has 3.8B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128K tokens
* GPUs: 512 H100-80G
* Training time: 10 days
* Training data: 4.9T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between May and June 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: June, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.9 trillion tokens, and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in premier league in a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
### Fine-tuning
A basic example of multi-GPUs supervised fine-tuning (SFT) with TRL and Accelerate modules is provided [here](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/sample_finetune.py).
## Benchmarks
We report the results under completion format for Phi-3-Mini-128K-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mistral-7b-v0.1, Mixtral-8x7b, Gemma 7B, Llama-3-8B-Instruct, and GPT-3.5.
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k–shot examples is listed per-benchmark.
| Category | Benchmark | Phi-3-Mini-128K-Ins | Gemma-7B | Mistral-7B | Mixtral-8x7B | Llama-3-8B-Ins | GPT3.5-Turbo-1106 |
| :----------| :-----------| :---------------------| :----------| :------------| :--------------| :----------------| :-------------------|
| Popular aggregated benchmark | AGI Eval <br>5-shot| 39.5 | 42.1 | 35.1 | 45.2 | 42 | 48.4 |
| | MMLU <br>5-shot | 69.7 | 63.6 | 61.7 | 70.5 | 66.5 | 71.4 |
| | BigBench Hard <br>3-shot | 72.1 | 59.6 | 57.3 | 69.7 | 51.5 | 68.3 |
| Language Understanding | ANLI <br>7-shot | 52.3 | 48.7 | 47.1 | 55.2 | 57.3 | 58.1 |
| | HellaSwag <br>5-shot | 70.5 | 49.8 | 58.5 | 70.4 | 71.1 | 78.8 |
| Reasoning | ARC Challenge <br>10-shot | 85.5 | 78.3 | 78.6 | 87.3 | 82.8 | 87.4 |
| | BoolQ <br>0-shot | 77.1 | 66 | 72.2 | 76.6 | 80.9 | 79.1 |
| | MedQA <br>2-shot | 56.4 | 49.6 | 50 | 62.2 | 60.5 | 63.4 |
| | OpenBookQA <br>10-shot | 78.8 | 78.6 | 79.8 | 85.8 | 82.6 | 86 |
| | PIQA <br>5-shot | 80.1 | 78.1 | 77.7 | 86 | 75.7 | 86.6 |
| | GPQA <br>0-shot | 29.7 | 2.9 | 15 | 6.9 | 32.4 | 29.9 |
| | Social IQA <br>5-shot | 74.7 | 65.5 | 74.6 | 75.9 | 73.9 | 68.3 |
| | TruthfulQA (MC2) <br>10-shot | 64.8 | 52.1 | 53 | 60.1 | 63.2 | 67.7 |
| | WinoGrande <br>5-shot | 71.0 | 55.6 | 54.2 | 62 | 65 | 68.8 |
| Factual Knowledge | TriviaQA <br>5-shot | 57.8 | 72.3 | 75.2 | 82.2 | 67.7 | 85.8 |
| Math | GSM8K CoT <br>8-shot | 85.3 | 59.8 | 46.4 | 64.7 | 77.4 | 78.1 |
| Code Generation | HumanEval <br>0-shot | 60.4 | 34.1 | 28.0 | 37.8 | 60.4 | 62.2 |
| | MBPP <br>3-shot | 70.0 | 51.5 | 50.8 | 60.2 | 67.7 | 77.8 |
| **Average** | | **66.4** | **56.0** | **56.4** | **64.4** | **65.5** | **70.3** |
**Long Context**: Phi-3 Mini-128K-Instruct supports 128K context length, therefore the model is capable of several long context tasks including long document/meeting summarization, long document QA.
| Benchmark | Phi-3 Mini-128K-Instruct | Mistral-7B | Mixtral 8x7B | LLaMA-3-8B-Instruct |
| :---------------| :--------------------------|:------------|:--------------|:---------------------|
| GovReport | 25.3 | 4.9 | 20.3 | 10.3 |
| QMSum | 21.9 | 15.5 | 20.6 | 2.9 |
| Qasper | 41.6 | 23.5 | 26.6 | 8.1 |
| SQuALITY | 24.1 | 14.7 | 16.2 | 25 |
| SummScreenFD | 16.8 | 9.3 | 11.3 | 5.1 |
| **Average** | **25.9** | **13.6** | **19.0** | **10.3** |
We take a closer look at different categories across 100 public benchmark datasets at the table below:
| Category | Phi-3-Mini-128K-Instruct | Gemma-7B | Mistral-7B | Mixtral 8x7B | Llama-3-8B-Instruct | GPT-3.5-Turbo |
|:----------|:--------------------------|:----------|:------------|:--------------|:---------------------|:---------------|
| Popular aggregated benchmark | 60.6 | 59.4 | 56.5 | 66.2 | 59.9 | 67.0 |
| Reasoning | 69.4 | 60.3 | 62.8 | 68.1 | 69.6 | 71.7 |
| Language understanding | 57.5 | 57.6 | 52.5 | 66.1 | 63.2 | 67.7 |
| Code generation | 61.0 | 45.6 | 42.9 | 52.7 | 56.4 | 70.4 |
| Math | 51.6 | 35.8 | 25.4 | 40.3 | 41.1 | 52.8 |
| Factual knowledge | 35.8 | 46.7 | 49.8 | 58.6 | 43.1 | 63.4 |
| Multilingual | 56.4 | 66.5 | 57.4 | 66.7 | 66.6 | 71.0 |
| Robustness | 61.1 | 38.4 | 40.6 | 51.0 | 64.5 | 69.3 |
Overall, the model with only 3.8B-param achieves a similar level of language understanding and reasoning ability as much larger models. However, it is still fundamentally limited by its size for certain tasks. The model simply does not have the capacity to store too much world knowledge, which can be seen for example with low performance on TriviaQA. However, we believe such weakness can be resolved by augmenting Phi-3-Mini with a search engine.
## Cross-Platform Support
[ONNX Runtime](https://onnxruntime.ai/blogs/accelerating-phi-3) now supports Phi-3 Mini models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices (server platforms; Windows, Linux, and Mac desktops; and mobile CPUs), with the precision best suited to each target. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3 Mini across CPU, GPU, and mobile devices.
Here are some of the optimized configurations we have added:
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
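As a rough illustration (not part of the original card), the snippet below checks which ONNX Runtime execution providers are available and creates a session with the best one present. The model filename is a placeholder, and full text generation is typically driven through the ONNX Runtime generate() API rather than a raw session:

```python
# Minimal sketch: pick the best available execution provider.
# "phi3-mini-128k-int4.onnx" is a placeholder filename, not the real artifact name.
import onnxruntime as ort

available = ort.get_available_providers()
print(available)  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

preferred = [
    p for p in ("DmlExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider")
    if p in available
]
session = ort.InferenceSession("phi3-mini-128k-int4.onnx", providers=preferred)
```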
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3 Mini-128K-Instruct model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types:
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
If you want to run the model on:
* NVIDIA V100 or earlier generation GPUs: call `AutoModelForCausalLM.from_pretrained()` with `attn_implementation="eager"` (see the sketch below)
* Optimized inference on GPU, CPU, and Mobile: use the **ONNX** models [128K](https://aka.ms/phi3-mini-128k-instruct-onnx)
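For the V100/eager case above, a minimal sketch (assuming the Hub id `microsoft/Phi-3-mini-128k-instruct`, inferred from this card's license link) looks like:

```python
# Fall back from flash attention to the eager implementation on older GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    attn_implementation="eager",  # the default requires flash-attention support
    torch_dtype=torch.float16,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-128k-instruct")
```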
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-mini-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies.
|
Lettria/grag-go-idf-mult_neg_rk_10-trial-0
|
Lettria
| 2025-03-02T22:28:26Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"tensorboard",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2467",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-base",
"base_model:quantized:intfloat/multilingual-e5-base",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-02T22:27:16Z |
---
base_model: intfloat/multilingual-e5-base
library_name: sentence-transformers
metrics:
- cosine_accuracy
- cosine_accuracy_threshold
- cosine_f1
- cosine_f1_threshold
- cosine_precision
- cosine_recall
- cosine_ap
- cosine_mcc
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:2467
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: 'Date de début: non précisée
Date de fin (clôture): non précisée
Date de début de la future campagne: non précisée'
sentences:
- '''Aménageurs privés'':entité|INTERVIENT_POUR|''Établissements publics territoriaux
franciliens'':entité'
- '''Commission permanente du Conseil régional'':groupe|DÉSIGNE|''Projets retenus'':__inferred__'
- '''Date de fin'':concept|EST|''non précisée'':__inferred__'
- source_sentence: 'Procédures et démarches: Deux
appels à projets sont lancés chaque année. Le candidat doit prendre contact avec
la direction de
l’aménagement durable du territoire avant la date de dépôt afin de préciser
son projet et de s’assurer de son éligibilité (via votre interlocuteur habituel
ou [email protected]). Le dossier de candidature est à remplir sur mesdemarches.iledefrance.fr. Un
jury d’élus et de personnalités qualifiées se réunit pour examiner les dossiers
et proposer des lauréats. L''attribution définitive des aides est votée en
commission permanente. Ce
dispositif d’aide peut être cumulable avec le Fonds Vert mis en place par l’Etat.
Les conditions d’éligibilité et d’intervention propres à chacun des dispositifs
ainsi que les contacts et liens utiles sont présentés dans le document "Tableau
AAP Friches 2023" en annexe de cette page.
Bénéficiaires: Collectivité ou institution - Autre (GIP, copropriété, EPA...),
Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou
institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes
de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité
ou institution - Département, Collectivité ou institution - EPT / Métropole du
Grand Paris, Collectivité ou institution - EPCI'
sentences:
- '''Fonds Vert'':programme|MIS_EN_PLACE_PAR|''Etat'':organisation'
- '''démonstration et initiation sportive'':activité|ENCADRÉ_PAR|''Ambassadrice
et Ambassadeur du Sport'':personne'
- '''Association'':entité|EST|''Bénéficiaires'':__inferred__'
- source_sentence: 'Procédures et démarches: Dépôt du dossier de candidature sur la
plateforme des aides régionales (mesdemarches.iledefrance.fr).
Bénéficiaires: Collectivité ou institution - Communes de < 2000 hab, Collectivité
ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution -
Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de > 20
000 hab, Collectivité ou institution - EPCI, Collectivité ou institution - EPT
/ Métropole du Grand Paris, Collectivité ou institution - Département, Collectivité
ou institution - Bailleurs sociaux, Collectivité ou institution - Autre (GIP,
copropriété, EPA...)
Précision sure les bénéficiaires: Toutes les structures de droit public ou de
droit privé'
sentences:
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''Collectivité ou
institution - Communes de 10 000 à 20 000 hab'':organisation'
- '''plateforme des aides régionales'':plateforme|CIBLE|''Collectivité ou institution
- EPT / Métropole du Grand Paris'':organisation'
- '''projets éligibles'':projet|AMÉLIORE_CONDITIONS_VIE|''résidents'':personne'
- source_sentence: 'Procédures et démarches: Les demandes d’aide devront être déposées
sur mesdemarches.iledefrance.fr, la plateforme des aides régionales.
Bénéficiaires: Particulier - Francilien, Professionnel - Culture, Professionnel
- Patrimoine, Association - Fondation, Association - ONG, Association - Régie
par la loi de 1901, Collectivité ou institution - Autre (GIP, copropriété, EPA...),
Collectivité ou institution - Bailleurs sociaux, Collectivité ou institution -
Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000
à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité
ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département,
Collectivité ou institution - EPCI, Collectivité ou institution - EPT / Métropole
du Grand Paris
Précision sure les bénéficiaires: Sont éligibles les propriétaires publics et
privés de maisons ou d’ateliers d’artistes.Les aménageurs mandatés par les collectivités
territoriales peuvent être bénéficiaires. Une convention de délégation de maîtrise
d’ouvrage doit avoir été signée entre la collectivité et l’aménageur.L’établissement
doit avoir fait l’objet d’un projet culturel et bénéficier d’une expertise scientifique.
La présence, le témoignage ou la trace tangibles de l’artiste ayant vécu sur place
doivent être attestés.Les établissements bénéficiant du label délivré par la DRAC
« Maisons des illustres » sont également concernés par le dispositif.'
sentences:
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DOSSIERS|''Collectivité ou
institution - Communes de 10 000 à 20 000 hab'':organisation'
- '''mesdemarches.iledefrance.fr'':plateforme|ACCEPTE_DEMANDE|''établissements avec
projet culturel et expertise scientifique'':bénéficiaire'
- '''plateforme des aides régionales'':plateforme|CIBLE|''Collectivité ou institution
- Communes de 2000 à 10 000 hab'':organisation'
- source_sentence: 'Procédures et démarches: Déposez sur mesdemarches.iledefrance.fr votre dossier
de demande de subvention présentant le projet de manière précise et comportant
toutes les pièces permettant l’instruction du dossier, réputé complet, par les
services de la Région. Après examen du dossier, la demande de subvention sera
soumise à la Commission permanente régionale pour délibération. Le versement
de la subvention est subordonné à la signature préalable d’une convention.
Bénéficiaires: Collectivité ou institution - Communes de 10 000 à 20 000 hab,
Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution
- Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab,
Collectivité ou institution - EPCI, Collectivité ou institution - EPT / Métropole
du Grand Paris
Précision sure les bénéficiaires: Pour les PEMR et aires de covoiturage : État,
Départements, EPCI, Communes, Syndicats mixtes,Ville de Paris.Pour les voies réservées :
État, Départements, EPCI.'
sentences:
- '''Date de début de la future campagne'':concept|EST|''non précisée'':__inferred__'
- '''prêt d''amorçage'':aide|FINANCE|''besoin en fonds de roulement'':concept'
- '''subvention'':__inferred__|SUBORDONNÉ_À|''convention'':document'
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-base
results:
- task:
type: binary-classification
name: Binary Classification
dataset:
name: BinaryClassifEval
type: BinaryClassifEval
metrics:
- type: cosine_accuracy
value: 0.9983766233766234
name: Cosine Accuracy
- type: cosine_accuracy_threshold
value: 0.22912392020225525
name: Cosine Accuracy Threshold
- type: cosine_f1
value: 0.9991876523151909
name: Cosine F1
- type: cosine_f1_threshold
value: 0.22912392020225525
name: Cosine F1 Threshold
- type: cosine_precision
value: 1.0
name: Cosine Precision
- type: cosine_recall
value: 0.9983766233766234
name: Cosine Recall
- type: cosine_ap
value: 0.9999999999999999
name: Cosine Ap
- type: cosine_mcc
value: 0.0
name: Cosine Mcc
---
# SentenceTransformer based on intfloat/multilingual-e5-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) <!-- at revision 835193815a3936a24a0ee7dc9e3d48c1fbb19c55 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Lettria/grag-go-idf-mult_neg_rk_10-trial-0")
# Run inference
sentences = [
'Procédures et démarches: Déposez sur\xa0mesdemarches.iledefrance.fr\xa0votre\xa0dossier de demande de subvention présentant le projet de manière précise et comportant toutes les pièces permettant l’instruction du dossier, réputé complet, par les services de la Région. Après examen du dossier, la demande de subvention sera soumise à la Commission permanente régionale pour délibération. Le versement de la subvention est subordonné à la signature préalable d’une convention.\nBénéficiaires: Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - EPCI, Collectivité ou institution - EPT / Métropole du Grand Paris\nPrécision sure les bénéficiaires: Pour les PEMR et aires de covoiturage : État, Départements, EPCI, Communes, Syndicats mixtes,Ville de Paris.Pour les voies réservées\xa0: État, Départements, EPCI.',
"'subvention':__inferred__|SUBORDONNÉ_À|'convention':document",
"'Date de début de la future campagne':concept|EST|'non précisée':__inferred__",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Binary Classification
* Dataset: `BinaryClassifEval`
* Evaluated with [<code>BinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.BinaryClassificationEvaluator)
| Metric | Value |
|:--------------------------|:--------|
| cosine_accuracy | 0.9984 |
| cosine_accuracy_threshold | 0.2291 |
| cosine_f1 | 0.9992 |
| cosine_f1_threshold | 0.2291 |
| cosine_precision | 1.0 |
| cosine_recall | 0.9984 |
| **cosine_ap** | **1.0** |
| cosine_mcc | 0.0 |
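The same evaluator can be re-run against this model; the sketch below uses illustrative pairs rather than the actual evaluation split:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import BinaryClassificationEvaluator

model = SentenceTransformer("Lettria/grag-go-idf-mult_neg_rk_10-trial-0")
evaluator = BinaryClassificationEvaluator(
    sentences1=["Date de fin (clôture): non précisée"],
    sentences2=["'Date de fin':concept|EST|'non précisée':__inferred__"],
    labels=[1],  # 1 = matching pair, 0 = non-matching
    name="BinaryClassifEval",
)
print(evaluator(model))  # dict of cosine accuracy/F1/precision/recall/AP
```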
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 2,467 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 26 tokens</li><li>mean: 191.64 tokens</li><li>max: 429 tokens</li></ul> | <ul><li>min: 18 tokens</li><li>mean: 31.2 tokens</li><li>max: 72 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|:---------------|
| <code>Type de project: L’excès de précipitations tout au long de l’année a conduit à une chute spectaculaire des rendements des céréales d’été et des protéagineux (blé, orge, pois, féverole, etc.) que produisent 90% des agriculteurs d’Île-de-France, historique grenier à blé du pays. Tributaires naturels du fleurissement des cultures, les apiculteurs professionnels de la région ont également souffert de ces dérèglements climatiques.La Région accompagne les exploitations concernées en leur apportant une aide exceptionnelle.</code> | <code>'excès de précipitations':phénomène|DIMINUE|'rendements des protéagineux':concept</code> | <code>1</code> |
| <code>Type de project: Dans le cadre de sa stratégie « Impact 2028 », la Région s’engage dans la défense de la souveraineté industrielle en renforçant son soutien à une industrie circulaire et décarbonée, porteuse d’innovations et créatrice d’emplois. PM'up Jeunes pousses industrielles soutient les projets d’implantation d’une première usine tournée vers la décarbonation, l’efficacité énergétique et la circularité des processus de production. Ces projets peuvent prendre l'une de ces formes : Une première unité de production industrielle, après une phase de prototypage,Une ligne pilote de production industrielle, en interne ou chez un tiers situé en Île-de-France, à condition que sa production soit destinée à de premières commercialisations,La transformation d’une unité de production pilote à une unité de production industrielle</code> | <code>'Région Île-de-France':organisation|soutient|'industrie décarbonée':concept</code> | <code>1</code> |
| <code>Procédures et démarches: Le dépôt des demandes de subvention se fait en ligne sur la plateforme régionale mesdemarches.iledefrance.fr : Session de dépôt unique pour les nouvelles demandes : du 30 septembre au 4 novembre 2024 (11 heures) pour des festivals qui se déroulent entre le 1er mars 2025 et le 28 février 2026 (vote à la CP de mars 2025). Pour les demandes de renouvellement, un mail est envoyé aux structures concernées par le service du Spectacle vivant en amont de chaque session de dépôt.<br>Bénéficiaires: Professionnel - Culture, Association - Fondation, Association - Régie par la loi de 1901, Association - ONG, Collectivité ou institution - Communes de 10 000 à 20 000 hab, Collectivité ou institution - Autre (GIP, copropriété, EPA...), Collectivité ou institution - Communes de 2000 à 10 000 hab, Collectivité ou institution - Communes de < 2000 hab, Collectivité ou institution - Communes de > 20 000 hab, Collectivité ou institution - Département, Collectivité ou institution - EPC...</code> | <code>'Collectivité ou institution - EPCI':bénéficiaire|PEUT_BÉNÉFICIER|'demandes de subvention':procédure</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
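A minimal sketch of constructing this loss with the parameters above (trainer setup omitted for brevity):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("intfloat/multilingual-e5-base")
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```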
### Evaluation Dataset
#### json
* Dataset: json
* Size: 616 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>label</code>
* Approximate statistics based on the first 616 samples:
| | sentence1 | sentence2 | label |
|:--------|:-------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:-----------------------------|
| type | string | string | int |
| details | <ul><li>min: 24 tokens</li><li>mean: 188.12 tokens</li><li>max: 394 tokens</li></ul> | <ul><li>min: 17 tokens</li><li>mean: 31.2 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>1: 100.00%</li></ul> |
* Samples:
| sentence1 | sentence2 | label |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------|:---------------|
| <code>Type de project: Le programme propose des rencontres le samedi après-midi dans une université ou une grande école réputée, entre les professionnels bénévoles et les lycéens et collégiens sous la forme d'atelier thématiques. Ces moments de rencontre touchent à une grande multitude de domaines d’activités. L'objectif est de donner l’opportunité aux jeunes les plus enclavés d’échanger avec des intervenants professionnels aux parcours atypiques et inspirants. Les intervenants suscitent les ambitions et élargissent les perspectives des élèves.</code> | <code>'rencontres':événement|impliquent|'professionnels bénévoles':groupe</code> | <code>1</code> |
| <code>Précision sure les bénéficiaires: Communes,Établissements publics de coopération intercommunale (avec ou sans fiscalité propre),Établissements publics territoriaux franciliens,Départements,Aménageurs publics et privés (lorsque ces derniers interviennent à la demande ou pour le compte d'une collectivité précitée).</code> | <code>'Aménageurs privés':entité|INTERVIENT_POUR|'Départements':entité</code> | <code>1</code> |
| <code>Date de début: non précisée<br>Date de fin (clôture): non précisée<br>Date de début de la future campagne: non précisée</code> | <code>'Date de fin':concept|EST|'non précisée':__inferred__</code> | <code>1</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `gradient_accumulation_steps`: 4
- `learning_rate`: 0.00019886275647090204
- `num_train_epochs`: 20
- `lr_scheduler_type`: cosine
- `warmup_steps`: 237
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `hub_model_id`: Lettria/grag-go-idf-mult_neg_rk_10-trial-0
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 8
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 4
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.00019886275647090204
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 237
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: Lettria/grag-go-idf-mult_neg_rk_10-trial-0
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss | BinaryClassifEval_cosine_ap |
|:-------:|:------:|:-------------:|:---------------:|:---------------------------:|
| 0.6472 | 50 | 3.1169 | - | - |
| **1.0** | **78** | **-** | **0.2842** | **1.0** |
| 1.2848 | 100 | 1.0068 | - | - |
| 1.9320 | 150 | 0.9742 | - | - |
| 2.0 | 156 | - | 0.3391 | 1.0 |
| 2.5696 | 200 | 0.799 | - | - |
| 3.0 | 234 | - | 0.6928 | 1.0 |
| 3.2071 | 250 | 1.0793 | - | - |
| 3.8544 | 300 | 1.169 | - | - |
| 4.0 | 312 | - | 0.3700 | 1.0 |
| 4.4919 | 350 | 0.9574 | - | - |
| 5.0 | 390 | - | 0.4647 | 1.0000 |
| 5.1294 | 400 | 1.0247 | - | - |
| 5.7767 | 450 | 1.047 | - | - |
| 6.0 | 468 | - | 0.7817 | 1.0 |
| 6.4142 | 500 | 1.2192 | - | - |
| 7.0 | 546 | - | 0.7229 | 1.0 |
| 7.0518 | 550 | 2.7037 | - | - |
| 7.6990 | 600 | 1.0304 | - | - |
| 8.0 | 624 | - | 0.9250 | 1.0 |
| 8.3366 | 650 | 0.8447 | - | - |
| 8.9838 | 700 | 0.6554 | - | - |
| 9.0 | 702 | - | 0.6732 | 1.0 |
| 9.6214 | 750 | 0.7615 | - | - |
| 10.0 | 780 | - | 0.5164 | 1.0 |
| 10.2589 | 800 | 0.5472 | - | - |
| 10.9061 | 850 | 0.4748 | - | - |
| 11.0 | 858 | - | 0.6871 | 1.0 |
| 11.5437 | 900 | 0.5189 | - | - |
| 12.0 | 936 | - | 0.4734 | 1.0 |
| 12.1812 | 950 | 0.507 | - | - |
| 12.8285 | 1000 | 0.3718 | - | - |
| 13.0 | 1014 | - | 0.5409 | 1.0 |
| 13.4660 | 1050 | 0.2996 | - | - |
| 14.0 | 1092 | - | 0.5624 | 1.0 |
| 14.1036 | 1100 | 0.2858 | - | - |
| 14.7508 | 1150 | 0.22 | - | - |
| 15.0 | 1170 | - | 0.5213 | 1.0 |
| 15.3883 | 1200 | 0.1946 | - | - |
| 16.0 | 1248 | - | 0.4821 | 1.0 |
| 16.0259 | 1250 | 0.1602 | - | - |
| 16.6731 | 1300 | 0.1851 | - | - |
| 17.0 | 1326 | - | 0.5248 | 1.0 |
| 17.3107 | 1350 | 0.1522 | - | - |
| 17.9579 | 1400 | 0.1892 | - | - |
| 18.0 | 1404 | - | 0.5408 | 1.0 |
| 18.5955 | 1450 | 0.0836 | - | - |
| 19.0 | 1482 | - | 0.5384 | 1.0000 |
| 19.2330 | 1500 | 0.1314 | - | - |
| 19.7508 | 1540 | - | 0.2842 | 1.0000 |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.9
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.3.0
- Accelerate: 1.1.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
LHRuig/scottjssx
|
LHRuig
| 2025-03-02T22:27:36Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-03-02T22:27:17Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: scottjsx
---
# scottjsx
<Gallery />
## Model description
scottjsx lora
## Trigger words
You should use `scottjsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/scottjssx/tree/main) them in the Files & versions tab.
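A minimal usage sketch with 🧨 diffusers, assuming a standard FLUX LoRA layout in this repo (not verified against the uploaded files):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/scottjssx")
pipe.enable_model_cpu_offload()  # FLUX.1-dev is large; offload if VRAM is tight

image = pipe("scottjsx wearing a suit, studio portrait", num_inference_steps=28).images[0]
image.save("scottjsx.png")
```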
|
tjoab/latex_finetuned_earlystop
|
tjoab
| 2025-03-02T22:27:09Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-02T22:25:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
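Pending details from the authors, here is a hedged sketch based only on this repo's vision-encoder-decoder / image-text-to-text tags; it assumes the repo also ships a matching processor:

```python
from PIL import Image
from transformers import AutoProcessor, VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("tjoab/latex_finetuned_earlystop")
processor = AutoProcessor.from_pretrained("tjoab/latex_finetuned_earlystop")

pixel_values = processor(images=Image.open("formula.png"), return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values, max_length=256)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```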
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/Tulu3-RAG-i1-GGUF
|
mradermacher
| 2025-03-02T22:25:00Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:ldsjmdy/Tulu3-Block-FT-RAG",
"base_model:ldsjmdy/Tulu3-RAG",
"base_model:quantized:ldsjmdy/Tulu3-RAG",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-02T20:10:38Z |
---
base_model: ldsjmdy/Tulu3-RAG
datasets:
- ldsjmdy/Tulu3-Block-FT-RAG
language:
- en
library_name: transformers
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ldsjmdy/Tulu3-RAG
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Tulu3-RAG-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
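As a quick, hedged example, one quant from this repo (filename taken from the table below) can be fetched with the Hub client and then passed to your GGUF runtime of choice:

```python
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/Tulu3-RAG-i1-GGUF",
    filename="Tulu3-RAG.i1-Q4_K_M.gguf",
)
print(path)  # pass this file to llama.cpp, e.g. `llama-cli -m <path>`
```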
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Tulu3-RAG-i1-GGUF/resolve/main/Tulu3-RAG.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
zhangtemplar/ppo-SnowballTarget
|
zhangtemplar
| 2025-03-02T22:20:37Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] |
reinforcement-learning
| 2025-03-02T22:12:40Z |
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: zhangtemplar/ppo-SnowballTarget
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
tp2422/ppo-LunarLander-v2
|
tp2422
| 2025-03-02T22:15:48Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-02T21:46:30Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -64.65 +/- 33.20
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the zip filename follows the usual `huggingface_sb3` convention and is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it
checkpoint = load_from_hub("tp2422/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Zakryah/whisper-tiny-try
|
Zakryah
| 2025-03-02T22:14:33Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"hu",
"dataset:mozilla-foundation/common_voice_17_0",
"base_model:openai/whisper-tiny",
"base_model:adapter:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"region:us"
] | null | 2025-03-02T16:57:04Z |
---
library_name: peft
language:
- hu
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_17_0
metrics:
- wer
model-index:
- name: Whisper Tiny Hu Test - Zakryah
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 17.0
type: mozilla-foundation/common_voice_17_0
config: hu
split: test
args: 'config: hu, split: test'
metrics:
- type: wer
value: 113.13022828434707
name: Wer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Tiny Hu Test - Zakryah
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 17.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2430
- Wer: 113.1302
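A minimal inference sketch (not part of the autogenerated card): load the PEFT adapter on top of the base Whisper model for Hungarian transcription.

```python
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "Zakryah/whisper-tiny-try")
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-tiny", language="hungarian", task="transcribe"
)
```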
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.3058 | 0.3299 | 1000 | 1.2987 | 114.1613 |
| 1.3143 | 0.6598 | 2000 | 1.2633 | 112.6243 |
| 1.2969 | 0.9898 | 3000 | 1.2478 | 113.3247 |
| 1.2082 | 1.3197 | 4000 | 1.2430 | 113.1302 |
### Framework versions
- PEFT 0.14.1.dev0
- Transformers 4.50.0.dev0
- Pytorch 2.6.0+cu126
- Datasets 3.3.2
- Tokenizers 0.21.0
|
dinerburger/Anubis-Pro-105B-v1-AWQ
|
dinerburger
| 2025-03-02T22:14:24Z | 0 | 0 | null |
[
"safetensors",
"llama",
"en",
"base_model:TheDrummer/Anubis-Pro-105B-v1",
"base_model:quantized:TheDrummer/Anubis-Pro-105B-v1",
"license:other",
"4-bit",
"awq",
"region:us"
] | null | 2025-03-02T20:29:48Z |
---
license: other
language:
- en
base_model:
- TheDrummer/Anubis-Pro-105B-v1
base_model_relation: quantized
---
# Join our Discord! https://discord.gg/Nbv9pQ88Xb
## 4000 members strong 💪
---
# Anubis-Pro-105B-v1 AWQ
This is an AWQ-quantized version of [Anubis-Pro-105B-v1](https://huggingface.co/TheDrummer/Anubis-Pro-105B-v1) suitable for deployment with vLLM or other serverless tools. In truth I'm just trying to get it hosted while I finish my RunPod-serverless TabbyAPI implementation.
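A minimal serving sketch with vLLM (adjust `tensor_parallel_size` to your hardware; a 105B AWQ checkpoint still needs tens of GB of VRAM across GPUs):

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="dinerburger/Anubis-Pro-105B-v1-AWQ",
    quantization="awq",
    tensor_parallel_size=2,  # assumption: split across 2 GPUs
)
outputs = llm.generate(["Hello!"], SamplingParams(max_tokens=128))
print(outputs[0].outputs[0].text)
```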
# Anubis Pro 105B v1 Original Model Card
[BeaverAI](https://huggingface.co/BeaverAI) proudly presents...
# Anubis Pro 105B v1 🐩

## Special Thanks
- Thank you to each and everyone who donated and subscribed in [Ko-Fi](https://ko-fi.com/thedrummer) to make our venture a little bit easier.
- I'm also recently unemployed. I am a Software Developer with 8 years of experience in Web, API, AI, and adapting to new tech and requirements. If you're hiring, feel free to reach out.
## Supported Chat Template
- Llama 3 Chat for RP and Instruct
- Alpaca for Story Adventure
## Description
An upscaled version of Llama 3.3 70B with 50% more layers. Finetuned further to make use of its new layers.
> I'm really liking it so far. I haven't noticed any slop, the writing is good, very creative. (I know it's an overused term, but compared to other L3.3 finetunes, it really does feel so.). Definitely deserves a release. I've seen many unique generations in an hour that I've never seen before with other finetunes.
> yea it writes like abliterated 3.3, follows my intended writing style nicely
> I think overall this feels like a better Behemoth to me. It has a lot of elements of its emotional intelligence, ability to read between the lines and creativity, but without as much slop and with much better character adherence and prompt following. Also with fewer parameters, so it's easier to run too!
> After playing around with the new Anubis upscale for a few hours I've gotta say it's my new favourite model so far. It's a bit heavy to run, but welp.
> It's a great model and there's a notable intelligent jump over the base Anubis, and many other 70B Llamas I've tried. It mainly feels like an expanded ver of L3.3
> Anubis Pro 105B is fantastic, I pushed it to almost 80K context and it still was pretty reasonable. 0.75 temp (temp last), 0.2 smoothing factor, 2 smoothing curve, 0.01 min-p, 4 dry_mult, 1 dry_allowed_length, 3 dry_base.
> I've been playing with it and it's surprisingly good! It has that... emotional intelligence that you get with 123B, but keeps that L3.3 prompt adherence that keeps characters from drifting. The best of both worlds. Even if it's a little slower, that's worth it. Running at temp 1, min_p 0.02, + Llamaception
## Links
- Original: https://huggingface.co/TheDrummer/Anubis-Pro-105B-v1
- GGUF: https://huggingface.co/TheDrummer/Anubis-Pro-105B-v1-GGUF
- iMatrix (recommended): https://huggingface.co/bartowski/TheDrummer_Anubis-Pro-105B-v1-GGUF
|
SrgSauce/ppo-LunarLander-v2
|
SrgSauce
| 2025-03-02T22:14:04Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-02T22:01:05Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 256.21 +/- 24.58
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading the checkpoint from the Hub (the zip filename follows the usual `huggingface_sb3` convention and is an assumption):

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub, then load it
checkpoint = load_from_hub("SrgSauce/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
auxyus/4f2489b0-5528-4db4-b9da-fa3dac5ac11b
|
auxyus
| 2025-03-02T22:13:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM-135M",
"base_model:adapter:unsloth/SmolLM-135M",
"license:apache-2.0",
"region:us"
] | null | 2025-03-02T21:43:26Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM-135M
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4f2489b0-5528-4db4-b9da-fa3dac5ac11b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM-135M
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c5f511461c681123_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c5f511461c681123_train_data.json
type:
field_instruction: prompt_serial
field_output: hypothesis
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
ddp_timeout: 1800
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_checkpointing_kwargs:
use_reentrant: true
group_by_length: true
hub_model_id: auxyus/4f2489b0-5528-4db4-b9da-fa3dac5ac11b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 10
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1600
micro_batch_size: 4
mlflow_experiment_name: /tmp/c5f511461c681123_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-08
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
relora_prune_ratio: 0.9
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 512
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: acopia-grant
wandb_mode: online
wandb_name: f91de1f8-d551-4279-b873-14f7a53b5160
wandb_project: Gradients-On-165
wandb_run: your_name
wandb_runid: f91de1f8-d551-4279-b873-14f7a53b5160
warmup_steps: 50
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4f2489b0-5528-4db4-b9da-fa3dac5ac11b
This model is a fine-tuned version of [unsloth/SmolLM-135M](https://huggingface.co/unsloth/SmolLM-135M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
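A minimal inference sketch (not part of the autogenerated card): apply the LoRA adapter to the base model.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/SmolLM-135M")
model = PeftModel.from_pretrained(base, "auxyus/4f2489b0-5528-4db4-b9da-fa3dac5ac11b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/SmolLM-135M")
```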
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 1600
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0004 | 1 | 1.5713 |
| 0.003 | 0.0424 | 100 | 0.0798 |
| 0.0013 | 0.0848 | 200 | 0.0139 |
| 0.0006 | 0.1273 | 300 | 0.0060 |
| 0.0008 | 0.1697 | 400 | 0.0054 |
| 0.0004 | 0.2121 | 500 | 0.0020 |
| 0.0005 | 0.2545 | 600 | 0.0036 |
| 0.0003 | 0.2970 | 700 | 0.0009 |
| 0.0006 | 0.3394 | 800 | 0.0016 |
| 0.0002 | 0.3818 | 900 | 0.0013 |
| 0.0002 | 0.4242 | 1000 | 0.0006 |
| 0.0005 | 0.4666 | 1100 | 0.0006 |
| 0.0003 | 0.5091 | 1200 | 0.0004 |
| 0.0003 | 0.5515 | 1300 | 0.0004 |
| 0.0002 | 0.5939 | 1400 | 0.0004 |
| 0.001 | 0.6363 | 1500 | 0.0004 |
| 0.0003 | 0.6788 | 1600 | 0.0004 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
SAi404/niko_v0.5
|
SAi404
| 2025-03-02T22:10:08Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-02T21:53:01Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/63dce765-7ff6-4983-aae6-7a21d870b3b9
|
texanrangee
| 2025-03-02T22:09:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T21:58:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
logasja/auramask-ensemble-willow
|
logasja
| 2025-03-02T22:08:20Z | 0 | 0 |
keras
|
[
"keras",
"adversarial",
"aesthetic",
"quality",
"filter",
"image-to-image",
"dataset:logasja/FDF",
"base_model:logasja/ArcFace",
"base_model:finetune:logasja/ArcFace",
"license:gpl-3.0",
"region:us"
] |
image-to-image
| 2025-03-02T22:03:14Z |
---
library_name: keras
widget:
- text: input
output:
url: ./assets/input.png
- text: target
output:
url: ./assets/target.png
- text: output
output:
url: ./assets/output.png
metrics:
- TopIQ-FR
- ArcFace Cosine Distance
- VGGFace2 Cosine Distance
datasets:
- logasja/FDF
pipeline_tag: image-to-image
tags:
- adversarial
- aesthetic
- quality
- filter
base_model:
- vnet
- logasja/ArcFace
- logasja/VGGFace
license: gpl-3.0
---
<Gallery />
Training logs [here](https://wandb.ai/spuds/auramask/runs/5ea0040a6b613278512f5a6c471fc568)
# Model Description
This model uses a modified vnet for 2D input/output implemented [here](https://github.com/logasja/keras3-unets) with the following configuration.
```json
{
"activation": "ReLU",
"batch_norm": false,
"filter_num": [
128,
256,
512,
1024,
1024
],
"n_labels": 3,
"output_activation": "tanh",
"pool": false,
"res_num_ini": 1,
"res_num_max": 3,
"unpool": false
}
```
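The training run itself used the following configuration: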
```json
{
"alpha": 0.0001,
"batch": 32,
"epochs": 500,
"epsilon": 1,
"input": "(256, 256)",
"losses": {
"FEAT_ArcFace": {
"d": "cosine_similarity",
"f": "ArcFace",
"name": "FEAT_ArcFace",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"FEAT_VGG-Face": {
"d": "cosine_similarity",
"f": "VGG-Face",
"name": "FEAT_VGG-Face",
"reduction": "sum_over_batch_size",
"threshold": 0.68,
"weight": 0.05
},
"IQASSIMC": {
"lower_better": false,
"name": "IQASSIMC",
"reduction": "sum_over_batch_size",
"weight": 0.5
},
"TopIQ": {
"full_ref": true,
"lower_better": false,
"name": "TopIQ",
"reduction": "sum_over_batch_size",
"score_range": "~0, ~1",
"weight": 0.5
}
},
"mixed_precision": true,
"optimizer": {
"amsgrad": false,
"beta_1": 0.9,
"beta_2": 0.999,
"clipnorm": null,
"clipvalue": null,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"epsilon": 1e-07,
"global_clipnorm": null,
"gradient_accumulation_steps": null,
"learning_rate": 9.999999747378752e-05,
"loss_scale_factor": null,
"name": "adamw",
"use_ema": false,
"weight_decay": 0.004
},
"seed": "BIIIIIGSTRETCH",
"testing": 0.01,
"training": 0.99
}
```
## Model Architecture Plot

|
ivolegrey/Sci-fi_Sketch_Style_SDXL
|
ivolegrey
| 2025-03-02T22:07:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] |
text-to-image
| 2025-03-02T22:04:54Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
rough sketch, messy lineart, monochromatic, flat color, full body, 1girl,
mature, beautiful face, long red hair, blunt fringe, bangs, detailed blue
eyes, curvy, thin waist, silver metal body, cyborg, dynamic pose, black
background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00220_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, skyline, overgrown
sci-fi city, flooded, gigantic buildings, beach, tropical, pink sunset,
planet with rings, starry sky
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00221_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, skyline, overgrown
sci-fi city, flooded, gigantic buildings, beach, tropical, pink blue sunset,
planet with rings, starry sky
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00222_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, skyline, overgrown
sci-fi city, flooded, gigantic buildings, beach, tropical, pink blue sunset,
planet with rings, starry sky
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00224_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, giant monster,
claws, wings, black eyes, sharp teeth, eldritch style, skyline, sci-fi city,
gigantic buildings, in outer space, black hole, black background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00226_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, giant monster,
claws, wings, black eyes, sharp teeth, eldritch style, skyline, sci-fi city,
gigantic buildings, in outer space, black hole, black background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00229_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, giant monster,
claws, wings, black eyes, sharp teeth, eldritch style, skyline, sci-fi city,
gigantic buildings, in outer space, black hole, black background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00230_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, giant monster,
eldritch style, in outer space, black hole, black background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00233_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, giant monster,
eldritch style, in outer space, black hole, black background
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00234_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, black giant monster,
eldritch type, wandering through a barren landscape, city, sci-fi, ruins,
rocks, mountains, storm clouds, dark sky
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00241_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, jungle, trees,
vines, ruins, river, sand
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00249_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, jungle, trees,
vines, ruins, river, sand
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00253_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, jungle, trees,
vines, ruins, river, sand
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00254_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, polar landscape,
snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00255_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, polar landscape,
snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00256_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, polar landscape,
snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00259_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, polar landscape,
snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00260_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, 1boy, mature man,
full body, close-up, short black hair, detailed eyes, cyborg, in a polar
landscape, snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00263_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, 1boy, mature man,
full body, close-up, short black hair, detailed eyes, cyborg, in a polar
landscape, snowy mountains, ice, river, ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00267_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, 1girl, mature woman,
upper body, close-up, shorter black hair, blush, beautiful face, detailed
eyes, soft smile, pale white skin, black v-shirt, cleavage, standing at the
beach, waves, ocean, mangrove, trees, overgrown ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00273_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, 1girl, mature woman,
upper body, close-up, shorter black hair, blush, beautiful face, detailed
eyes, soft smile, pale white skin, black v-shirt, cleavage, standing at the
beach, waves, ocean, mangrove, trees, overgrown ruins
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00275_.png
- text: >-
rough sketch, messy lineart, monochromatic, flat color, 1girl, white hair,
ponytail, aesthetic face, slender sci-fi body, sci-fi city in the
background, starry sky, nebula
parameters:
negative_prompt: >-
worst quality, bad quality, jpeg artifacts, bad hands, bad finger, bad
anatomy, watermark, artist name
output:
url: images/Far_Future_Upscaled_00283_.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: mit
---
# Sci-fi Sketch Style
<Gallery />
## Model description
This LoRA produces a rough pen sketch style while also handling futuristic places, natural environments, space, horrifying monsters, giant mechas, and aesthetic people.
## Usage
There is no trigger word, but prompts such as "rough sketch", "monochromatic/desaturated", "messy line art", "flat color", and "sci-fi" help a lot.
The LoRA was trained on CivitAI with over 500 images generated using Microsoft's Copilot Designer, using auto-generated captions with a few manual adjustments.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ivolegrey/Sci-fi_Sketch_Style_SDXL/tree/main) them in the Files & versions tab.
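A usage sketch with the 🧨 diffusers library (the LoRA weight filename is not stated in this card, so `weight_name` may need to be set to the .safetensors file from the Files tab):
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("ivolegrey/Sci-fi_Sketch_Style_SDXL")
image = pipeline(
    "rough sketch, messy lineart, monochromatic, flat color, "
    "skyline, overgrown sci-fi city, starry sky"
).images[0]
```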
|
jiebi/SIGIR-C2I-Dec
|
jiebi
| 2025-03-02T22:05:53Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-03-02T19:07:32Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.13.2
|
mshipulin/sbert_faq_finetuned_v2
|
mshipulin
| 2025-03-02T22:05:06Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5240",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:ai-forever/sbert_large_nlu_ru",
"base_model:finetune:ai-forever/sbert_large_nlu_ru",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-03-02T22:04:22Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5240
- loss:CosineSimilarityLoss
base_model: ai-forever/sbert_large_nlu_ru
widget:
- source_sentence: В какие дни и часы проходит мероприятие?
sentences:
- 'Какие детали о месте проведения: адрес, вход, ориентиры?'
- Когда мероприятие открывается и закрывается?
- Укажите, пожалуйста, адрес и схему проезда на выставку.
- source_sentence: Сколько стоит билет и как пройти регистрацию?
sentences:
- Укажите, пожалуйста, время проведения выставки.
- Какова стоимость входа и какие документы нужны для регистрации?
- Какие цены на участие и что нужно для оформления?
- source_sentence: Какие каналы связи доступны для контакта с организаторами?
sentences:
- Есть ли чат или приложение для связи с организаторами?
- Куда подойти, чтобы задать вопрос организаторам?
- Будут ли организованы встречи с представителями крупных сетей?
- source_sentence: Какие инновационные продукты будут представлены?
sentences:
- Какие цены на участие и что нужно для оформления?
- Будут ли презентации доступны в виде PDF или видео?
- Какие технологии будут в центре внимания?
- source_sentence: 'Какие подробности о локации: адрес, сторона здания, вход?'
sentences:
- Куда подойти для получения бейджа и раздаточных материалов?
- 'Где находится площадка: уточните адрес и схему?'
- Как назначить встречу с потенциальными партнерами?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on ai-forever/sbert_large_nlu_ru
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [ai-forever/sbert_large_nlu_ru](https://huggingface.co/ai-forever/sbert_large_nlu_ru). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [ai-forever/sbert_large_nlu_ru](https://huggingface.co/ai-forever/sbert_large_nlu_ru) <!-- at revision ecc24eb563756a75cfbec32e1025825826589f7f -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("mshipulin/sbert_faq_finetuned_v2")
# Run inference
sentences = [
'Какие подробности о локации: адрес, сторона здания, вход?',
'Где находится площадка: уточните адрес и схему?',
'Куда подойти для получения бейджа и раздаточных материалов?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 5,240 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 6 tokens</li><li>mean: 10.89 tokens</li><li>max: 16 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 11.25 tokens</li><li>max: 20 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.9</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:------------------------------------------------------------------------------|:--------------------------------------------------------------------------|:-----------------|
| <code>Какие шансы для сотрудничества предлагаются на мероприятии?</code> | <code>Какие форматы взаимодействия с инвесторами будут предложены?</code> | <code>1.0</code> |
| <code>Какое место предусмотрено для выдачи бейджей?</code> | <code>Какое место предусмотрено для получения бейджей?</code> | <code>1.0</code> |
| <code>Сколько нужно заплатить за участие и какие шаги для регистрации?</code> | <code>Какие цены на участие и что нужно для оформления?</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
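A minimal fine-tuning sketch with this loss, using one of the sample pairs above as hypothetical data (not the exact training script):
```python
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses

model = SentenceTransformer("ai-forever/sbert_large_nlu_ru")
train_examples = [
    InputExample(
        texts=["Какое место предусмотрено для выдачи бейджей?",
               "Какое место предусмотрено для получения бейджей?"],
        label=1.0,
    ),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)  # MSE between cosine similarity and label
model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=3)
```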
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.3.1
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
Okeifheron/sis1
|
Okeifheron
| 2025-03-02T22:03:00Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T21:37:09Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: sis1
---
# Sis1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `sis1` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Okeifheron/sis1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
IntelligentEstate/EdgeRunner-Baby_Phoenix-1B-IQ4_XS-GGUF
|
IntelligentEstate
| 2025-03-02T22:02:59Z | 0 | 1 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"dataset:IntelligentEstate/The_Key",
"base_model:Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2",
"base_model:quantized:Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-02T11:45:33Z |
---
library_name: transformers
license: apache-2.0
base_model: Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2
tags:
- llama-cpp
datasets:
- IntelligentEstate/The_Key
---
# IntelligentEstate/EdgeRunner-Baby_Phoenix-1B-IQ4_XS-GGUF
A small, functional edge model that leads the field in excellence on devices and in swarms.
This model was converted to GGUF format from [`Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2`](https://huggingface.co/Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2) using llama.cpp.
Refer to the [original model card](https://huggingface.co/Youlln/ECE-PRYMMAL-YL-1B-SLERP-V2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
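For example (`<model>.gguf` is a placeholder; use the filename listed in this repo's Files tab):
```bash
# CLI
llama-cli --hf-repo IntelligentEstate/EdgeRunner-Baby_Phoenix-1B-IQ4_XS-GGUF \
  --hf-file <model>.gguf -p "The meaning to life and the universe is"

# Server
llama-server --hf-repo IntelligentEstate/EdgeRunner-Baby_Phoenix-1B-IQ4_XS-GGUF \
  --hf-file <model>.gguf -c 2048
```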
|
texanrangee/69d95b8b-b5b0-4939-b01e-507dbee30e4a
|
texanrangee
| 2025-03-02T21:59:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T21:53:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Navid-AI/Yehia-7B-preview
|
Navid-AI
| 2025-03-02T21:58:02Z | 53 | 3 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ar",
"en",
"base_model:ALLaM-AI/ALLaM-7B-Instruct-preview",
"base_model:finetune:ALLaM-AI/ALLaM-7B-Instruct-preview",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-27T19:50:53Z |
---
language:
- ar
- en
base_model:
- ALLaM-AI/ALLaM-7B-Instruct-preview
pipeline_tag: text-generation
library_name: transformers
---
# Yehia: A Simple (nice to talk to) Arabic Model
<center>
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/1OUwFm2hWBAHLCVvh2JkG.png" width="75%">
</center>
## 🤔 What is Yehia?
Yehia is a 7-billion-parameter language model built to be more than just a tool—it’s a companion. Based on ALLaM-AI’s [ALLaM-7B-Instruct-preview](https://huggingface.co/ALLaM-AI/ALLaM-7B-Instruct-preview), Yehia is designed to offer thoughtful, kind, and helpful conversations in both Arabic and English.
[You can chat with Yehia from here 👋](https://huggingface.co/spaces/Navid-AI/Yehia-7B-preview)
### 📰 Interesting News
As of **2/3/2025**, Yehia is the best Arabic model on the [AraGen-Leaderboard](https://huggingface.co/spaces/inceptionai/AraGen-Leaderboard) among models from 0.5B to 25B parameters 🔥
<img src="https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/58HX7laDAJCkWOTZm_KY7.png">
## 🛠️ How Yehia was made?
Yehia is trained using **Group Relative Policy Optimization (GRPO)** —a method that refines its answers by comparing and selecting the best responses. Its development follows the **3C3H** metric, prioritizing:
- **Correctness ✅:** Accurate information to build trust.
- **Completeness 📚:** Full, well-rounded answers.
- **Conciseness ✂️:** Clear, to-the-point responses.
- **Helpfulness 🤝:** Always aiming to support and uplift.
- **Honesty 💬:** Transparent, straightforward communication.
- **Harmlessness ❤️:** Promoting kindness and safety.
The judge model scoring the answers was none other than `claude-sonnet-3.5` 🔍
## 🚀 Getting Started
To start using Yehia, you can easily load the model with the `transformers` library:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_name = "Navid-AI/Yehia-7B-preview"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, attn_implementation="flash_attention_2", device_map="auto")
messages = [
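# The system prompt (Arabic) below translates to: "You are Yehia, an AI developed by 'Navid',
# specialized in logical reasoning and careful analysis. Your mission is to inspire users and
# support them on their journey toward learning, growth, and achieving their goals by offering
# smart, well-considered solutions." The user message translates to: "Hello Yehia! How are you today?"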
{"role": "system", "content": "أنت يحيى، ذكاءٌ اصطناعيٌّ طورته شركة 'نفيد'، متخصصٌ في التفكير المنطقي والتحليل الدقيق. مهمتك إلهام المستخدمين ودعمهم في رحلتهم نحو التعلّم، النمو، وتحقيق أهدافهم من خلال تقديم حلولٍ ذكيةٍ ومدروسة."},
{"role": "user", "content": "مرحباً يا يحيى! كيف حالك اليوم؟"}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt", return_dict=True).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
**Note:** If `flash_attention_2` gives you any problems, just remove the `attn_implementation` argument.
## 🌟 What Can Yehia Do?
- **Explain Concepts 💡:** Break down educational topics in Arabic to help learners understand easily.
- **Engage in Conversations 🗣️:** Offer friendly and supportive chats that uplift users.
- **Promote Learning 📖:** Encourage curiosity and provide knowledge in an accessible way.
Yehia shines in conversations that feel personal and uplifting, always striving to improve.
## 💭 Remember
Yehia’s name means *“God is gracious”* in Arabic—reflecting its mission to bring grace and connection to every interaction. Whether you’re a student, creator, or just curious, Yehia is here to brighten your day.
## 📌 Citation
If you would like to cite Yehia in your work, please use the following BibTeX entry:
```
@misc{yehia2025,
title={Yehia 7B Preview},
author={Navid-AI},
year={2025},
howpublished={\url{https://huggingface.co/Navid-AI/Yehia-7B-preview}}
}
```
|
zacsmith/mrzacsmith
|
zacsmith
| 2025-03-02T21:55:57Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T21:40:31Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MRZACSMITH
---
# Mrzacsmith
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MRZACSMITH` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('zacsmith/mrzacsmith', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
TheBlueObserver/Llama-3.2-3B-Instruct-huatuo-r32-a32-epoch1
|
TheBlueObserver
| 2025-03-02T21:55:26Z | 0 | 0 | null |
[
"safetensors",
"region:us"
] | null | 2025-03-02T01:43:41Z |
# TheBlueObserver/Llama-3.2-3B-Instruct-huatuo-r32-a32-epoch1 Model Card
## LoRA Details
- **Rank**: 32
- **Alpha**: 32
## Training Details
- **Datasets**: huatuo_reasoning
- **Limit**: -1
- **Max Steps**: default
- **Epochs**: 1
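A loading sketch, assuming this repo holds a PEFT LoRA adapter for `meta-llama/Llama-3.2-3B-Instruct` (inferred from the model name; not confirmed by the card):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-3B-Instruct"
adapter_id = "TheBlueObserver/Llama-3.2-3B-Instruct-huatuo-r32-a32-epoch1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)  # applies the r=32, alpha=32 LoRA
```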
|
pymmdrza/TPT_v1
|
pymmdrza
| 2025-03-02T21:48:16Z | 0 | 0 | null |
[
"trade",
"auto",
"crypto",
"trading",
"binance",
"kucoin",
"bitcoin",
"future",
"margin",
"bar-trade",
"future-trade",
"levrege-trade",
"crypto-trade",
"image-text-to-text",
"en",
"fa",
"dataset:pymmdrza/datatrade",
"base_model:meta-llama/Llama-3.3-70B-Instruct",
"base_model:finetune:meta-llama/Llama-3.3-70B-Instruct",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-02-22T10:50:55Z |
---
license: mit
datasets:
- pymmdrza/datatrade
language:
- en
- fa
base_model:
- meta-llama/Llama-3.3-70B-Instruct
pipeline_tag: image-text-to-text
tags:
- trade
- auto
- crypto
- trading
- binance
- kucoin
- bitcoin
- future
- margin
- bar-trade
- future-trade
- levrege-trade
- crypto-trade
---
# Trade Professional Trading
Take your real-time professional trading to the peak of profitability with the TPT model.
- Extracts the highest possible profit from each trade, based on the training and strategies it has learned.
- Professional futures trading on all supported exchanges with appropriately selected coefficients.
- Monitors the volume and recent events of the target market.
- Withholds forecasts and trading suggestions when it is not confident a trade will be profitable.
|
mradermacher/PirateShip-ChatML-4x12B-i1-GGUF
|
mradermacher
| 2025-03-02T21:46:24Z | 566 | 1 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:nbeerbower/PirateShip-ChatML-4x12B",
"base_model:quantized:nbeerbower/PirateShip-ChatML-4x12B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-02-25T07:25:13Z |
---
base_model: nbeerbower/PirateShip-ChatML-4x12B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nbeerbower/PirateShip-ChatML-4x12B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
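As a concrete example, the Q4_K_M file from the table below runs directly with llama.cpp (any of the listed files works the same way):
```bash
llama-cli --hf-repo mradermacher/PirateShip-ChatML-4x12B-i1-GGUF \
  --hf-file PirateShip-ChatML-4x12B.i1-Q4_K_M.gguf -p "Ahoy"
```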
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ1_S.gguf) | i1-IQ1_S | 8.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ1_M.gguf) | i1-IQ1_M | 9.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 10.6 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 11.7 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ2_S.gguf) | i1-IQ2_S | 12.0 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ2_M.gguf) | i1-IQ2_M | 13.1 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 13.5 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q2_K.gguf) | i1-Q2_K | 14.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 15.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 16.1 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 17.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ3_S.gguf) | i1-IQ3_S | 17.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ3_M.gguf) | i1-IQ3_M | 17.3 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 18.8 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 20.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 20.9 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q4_0.gguf) | i1-Q4_0 | 22.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 22.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 23.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q4_1.gguf) | i1-Q4_1 | 24.4 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 26.8 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 27.6 | |
| [GGUF](https://huggingface.co/mradermacher/PirateShip-ChatML-4x12B-i1-GGUF/resolve/main/PirateShip-ChatML-4x12B.i1-Q6_K.gguf) | i1-Q6_K | 31.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
surafelabebe/mms-tts-amh
|
surafelabebe
| 2025-03-02T21:45:57Z | 57 | 0 |
transformers
|
[
"transformers",
"safetensors",
"vits",
"text-to-audio",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-02-25T00:36:46Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lod34/Animator2D-v1
|
Lod34
| 2025-03-02T21:44:21Z | 0 | 0 |
transformers
|
[
"transformers",
"text-to-image",
"en",
"dataset:pawkanarek/spraix_1024",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-image
| 2025-01-27T14:05:53Z |
---
license: mit
datasets:
- pawkanarek/spraix_1024
language:
- en
base_model:
- google-bert/bert-base-uncased
metrics:
- mse
library_name: transformers
pipeline_tag: text-to-image
new_version: Lod34/Animator2D-v2
---
# 🎨 Animator2D
Animator2D is an AI-powered model designed to generate pixel-art sprite animations from textual descriptions. This model leverages a BERT-based text encoder to extract textual features and a convolutional generative network to create animated sprites. The goal is to provide game developers and artists with a tool that can bring character concepts to life with minimal effort.
## 🛠️ Model Overview
- **Name:** Animator2D
- **Input:**
- Character description
- Number of animation frames
- Character action
- Viewing direction
- **Output:** Animated sprite sheet in image format
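As a rough sketch of the first stage only — the 🤗 Transformers BERT API below is real, but the convolutional generator's interface is not public here, so it is omitted:
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Encode a sprite description into text features, as described above.
tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-uncased")
encoder = AutoModel.from_pretrained("google-bert/bert-base-uncased")

prompt = "a knight swinging a sword, 8 frames, facing right"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    text_features = encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector
print(text_features.shape)  # torch.Size([1, 768]) -- fed to the sprite generator
```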
## 📦 Dataset
The model was trained using the [spraix_1024](https://huggingface.co/datasets/pawkanarek/spraix_1024) dataset, which contains animated sprites with detailed textual descriptions. This dataset serves as a foundation for training the model to generate high-quality, relevant sprites based on textual inputs.
## 🚀 Model Versions
Over time, several iterations of Animator2D have been developed, each improving on the previous version with different training strategies and hyperparameters. Below is a chronological overview of the versions created so far:
| Model Version | Description |
|----------------------|-------------|
| **Animator2D-v1** | The first full version developed in this project, utilizing a structured training approach with BERT for text encoding and a convolutional generator for sprite creation. |
| **Animator2D-mini-10e** | A simplified version trained with only 10 epochs, batch size of 8, learning rate of 1e-4, and image size of 64x64. |
| **Animator2D-mini-100e** | An extension of the mini-10e version, trained for 100 epochs for improved performance. |
| **Animator2D-mini-250e** | A more refined version with 250 epochs, batch size increased to 16, learning rate of 2e-4, and image resolution of 128x128. |
| **Animator2D-v2 (In Development)** | A new version being built from scratch with an entirely redesigned training process, aiming for better animation quality and efficiency. |
## 🔮 Future Goals
This is just the first iteration of Animator2D. Future updates will focus on refining and expanding its capabilities:
- **Multiple Output Formats**: Currently, the model generates a single sprite sheet. Future updates will enable exporting animations in various formats, including folders with individual frames, GIFs, and videos.
- **Frame Input Optimization**: The number of frames is currently manually defined. Improvements will include a more intuitive system that considers FPS and actual animation duration.
- **Model Refinement**: The current model is in an early stage. Future improvements will enhance sprite generation consistency and quality by optimizing the architecture and training dataset.
- **Sprite Size Customization**: A new input will allow users to specify the character height in pixels, dynamically adjusting the sprite’s artistic style. This will ensure greater flexibility, allowing for different art styles (e.g., Pokémon vs. Metal Slug aesthetics).
---
Animator2D is an exciting step toward AI-assisted sprite animation generation, and future versions will continue to push the boundaries of what’s possible in pixel-art automation! 🚀🎮
|
flemenn/test-lora
|
flemenn
| 2025-03-02T21:44:20Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T21:32:40Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Test Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then apply this LoRA on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('flemenn/test-lora', weight_name='lora.safetensors')

# Include the trigger word `TOK` in your prompt (see "Trigger words" above).
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ReadyArt/Progenitor-X-LLaMa-70B_EXL2_4.0bpw_H8
|
ReadyArt
| 2025-03-02T21:44:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2406.11617",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TareksLab/UL3.3-Nemo-X80-BASE-70B",
"base_model:merge:TareksLab/UL3.3-Nemo-X80-BASE-70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"exl2",
"region:us"
] |
text-generation
| 2025-03-02T21:38:14Z |
---
base_model:
- Sao10K/70B-L3.3-Cirrus-x1
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/L3.1-70B-Hanami-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- TareksLab/UL3.3-Nemo-X80-BASE-70B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.3
---
Progenitor, but with a different base: a custom merge I made in an attempt to uncensor it further.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using [TareksLab/UL3.3-Nemo-X80-BASE-70B](https://huggingface.co/TareksLab/UL3.3-Nemo-X80-BASE-70B) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1)
* [SicariusSicariiStuff/Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B)
* [Sao10K/L3.1-70B-Hanami-x1](https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1)
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3.1-70B-Hanami-x1
parameters:
weight: 0.20
density: 0.7
- model: Sao10K/70B-L3.3-Cirrus-x1
parameters:
weight: 0.20
density: 0.7
- model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
weight: 0.20
density: 0.7
- model: TheDrummer/Anubis-70B-v1
parameters:
weight: 0.20
density: 0.7
- model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
parameters:
weight: 0.20
density: 0.7
merge_method: della_linear
base_model: TareksLab/UL3.3-Nemo-X80-BASE-70B
parameters:
epsilon: 0.2
lambda: 1.1
dtype: bfloat16
tokenizer:
source: TareksLab/UL3.3-Nemo-X80-BASE-70B
```
|
baby-dev/a3ece6ee-3cb2-4213-972b-3712e9f57c6a
|
baby-dev
| 2025-03-02T21:42:20Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:unsloth/Llama-3.1-Storm-8B",
"base_model:adapter:unsloth/Llama-3.1-Storm-8B",
"region:us"
] | null | 2025-03-02T21:42:08Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Llama-3.1-Storm-8B
model-index:
- name: baby-dev/a3ece6ee-3cb2-4213-972b-3712e9f57c6a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/a3ece6ee-3cb2-4213-972b-3712e9f57c6a
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2512
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
daniel40/14c58f59-dcb4-4c05-b121-614c5c33fa8a
|
daniel40
| 2025-03-02T21:41:31Z | 0 | 0 |
peft
|
[
"peft",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | 2025-03-02T21:41:14Z |
---
library_name: peft
tags:
- generated_from_trainer
base_model: heegyu/WizardVicuna2-13b-hf
model-index:
- name: daniel40/14c58f59-dcb4-4c05-b121-614c5c33fa8a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# daniel40/14c58f59-dcb4-4c05-b121-614c5c33fa8a
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
sin3/sin3flux1beta
|
sin3
| 2025-03-02T21:40:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T21:15:27Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Sindre
---
# Sin3Flux1Beta
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Sindre` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline, then apply this LoRA on top of it.
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('sin3/sin3flux1beta', weight_name='lora.safetensors')

# Include the trigger word `Sindre` in your prompt (see "Trigger words" above).
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
NguyenDuyPhuc/Meta-Llama-3-8B
|
NguyenDuyPhuc
| 2025-03-02T21:38:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T21:36:30Z |
---
library_name: transformers
model_name: Meta-Llama-3-8B
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Meta-Llama-3-8B
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="NguyenDuyPhuc/Meta-Llama-3-8B", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
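For illustration only — the actual dataset and hyperparameters for this run are not documented here — a minimal TRL SFT setup looks roughly like this (the dataset and base model below are placeholders, not the ones used for this run):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset and base model, NOT the ones used for this run.
dataset = load_dataset("trl-lib/Capybara", split="train")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
    args=SFTConfig(output_dir="Meta-Llama-3-8B-SFT"),
)
trainer.train()
```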
### Framework versions
- TRL: 0.15.2
- Transformers: 4.49.0
- Pytorch: 2.6.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
DevQuasar/prithivMLmods.Eridanus-Opus-14B-r999-GGUF
|
DevQuasar
| 2025-03-02T21:34:48Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:prithivMLmods/Eridanus-Opus-14B-r999",
"base_model:quantized:prithivMLmods/Eridanus-Opus-14B-r999",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-03-02T19:28:18Z |
---
base_model:
- prithivMLmods/Eridanus-Opus-14B-r999
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [prithivMLmods/Eridanus-Opus-14B-r999](https://huggingface.co/prithivMLmods/Eridanus-Opus-14B-r999)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
flyingbugs/OpenR1-Qwen-7B-SFT
|
flyingbugs
| 2025-03-02T21:34:33Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:open-r1/OpenR1-Math-220k",
"base_model:Qwen/Qwen2.5-Math-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-02-25T00:39:06Z |
---
base_model: Qwen/Qwen2.5-Math-7B-Instruct
datasets: open-r1/OpenR1-Math-220k
library_name: transformers
model_name: OpenR1-Qwen-7B-SFT
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for OpenR1-Qwen-7B-SFT
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="flyingbugs/OpenR1-Qwen-7B-SFT", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jjh233/huggingface/runs/a2fjki6v)
This model was trained with SFT.
## Evaluation Results
| Model | MATH-500 | AIME24 | GPQA-diamond | MMLU (Generation) | BBH (Generation) | IFEVAL (Generation) | MMLU (Logits classification) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Qwen2.5-Math-7B-instruct | 82.6 | 3.3 | 34.8 | 23.2 | 18.9 | | 0.4173 |
| Flyingbugs-OpenR1-Qwen-7B | 88.8 | 43.3 | 37.8 | 23.1 | 18.9 | | 0.3091 |
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0.dev0
- Pytorch: 2.5.1
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
octave86/ppo-LunarLander-v2
|
octave86
| 2025-03-02T21:28:50Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-03-02T21:28:33Z |
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.12 +/- 45.67
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal sketch for loading this agent (the checkpoint filename is an assumption; check the repo's files):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it.
checkpoint = load_from_hub(repo_id="octave86/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
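To evaluate the loaded agent (a sketch; requires `gymnasium[box2d]`, and `model` comes from the snippet above):
```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")  # use "LunarLander-v3" on recent gymnasium releases
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```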
|
JFernandoGRE/llama31_8b_augmenteddemocracy_dpo_participants_50_critsupport
|
JFernandoGRE
| 2025-03-02T21:25:30Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T21:20:00Z |
---
library_name: transformers
tags:
- unsloth
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
braginpawel/deepseek-14b-orpo-945ex-6ep-6th_iteration-merged
|
braginpawel
| 2025-03-02T21:21:14Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"orpo",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T21:14:45Z |
---
base_model: unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- orpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** braginpawel
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-qwen-14b-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
RollinTwinz/Llama-11B-Vision-4Bit
|
RollinTwinz
| 2025-03-02T21:16:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mllama",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-03-02T21:02:20Z |
---
base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** RollinTwinz
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit
This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
leenaalsaghir/my-allam-7b
|
leenaalsaghir
| 2025-03-02T21:15:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T21:11:24Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/3840be21-4d6d-44ff-9b6f-10b724a7dc70
|
texanrangee
| 2025-03-02T21:13:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T13:38:00Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
texanrangee/5add0487-c889-4e13-bd31-2d545e74f822
|
texanrangee
| 2025-03-02T21:12:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T13:38:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Romain-XV/15208441-5456-4a5e-960e-9202f616bff1
|
Romain-XV
| 2025-03-02T21:10:01Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"falcon",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:fxmarty/really-tiny-falcon-testing",
"base_model:adapter:fxmarty/really-tiny-falcon-testing",
"license:mit",
"region:us"
] | null | 2025-03-02T20:40:19Z |
---
library_name: peft
license: mit
base_model: fxmarty/really-tiny-falcon-testing
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 15208441-5456-4a5e-960e-9202f616bff1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: fxmarty/really-tiny-falcon-testing
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8ac7c8ef96d8dd3c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8ac7c8ef96d8dd3c_train_data.json
type:
field_instruction: prompt
field_output: gold_standard_solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 4
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: false
hub_model_id: Romain-XV/15208441-5456-4a5e-960e-9202f616bff1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lora_target_modules:
- q_proj
- k_proj
- v_proj
- o_proj
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 8280
micro_batch_size: 2
mlflow_experiment_name: /tmp/8ac7c8ef96d8dd3c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: eca76530-5739-4ead-aa78-b02ae33f547b
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: eca76530-5739-4ead-aa78-b02ae33f547b
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 15208441-5456-4a5e-960e-9202f616bff1
This model is a fine-tuned version of [fxmarty/really-tiny-falcon-testing](https://huggingface.co/fxmarty/really-tiny-falcon-testing) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.9624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 8280
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 44.3547 | 0.0001 | 1 | 11.0852 |
| 43.9869 | 0.0182 | 150 | 11.0044 |
| 44.0157 | 0.0363 | 300 | 10.9971 |
| 43.9706 | 0.0545 | 450 | 10.9922 |
| 43.9953 | 0.0727 | 600 | 10.9876 |
| 43.9877 | 0.0908 | 750 | 10.9851 |
| 43.9662 | 0.1090 | 900 | 10.9824 |
| 43.9459 | 0.1271 | 1050 | 10.9807 |
| 43.9484 | 0.1453 | 1200 | 10.9787 |
| 43.9718 | 0.1635 | 1350 | 10.9776 |
| 43.9164 | 0.1816 | 1500 | 10.9762 |
| 44.003 | 0.1998 | 1650 | 10.9752 |
| 43.8806 | 0.2180 | 1800 | 10.9740 |
| 43.8483 | 0.2361 | 1950 | 10.9738 |
| 43.9305 | 0.2543 | 2100 | 10.9724 |
| 43.9255 | 0.2725 | 2250 | 10.9719 |
| 43.8895 | 0.2906 | 2400 | 10.9708 |
| 43.9281 | 0.3088 | 2550 | 10.9701 |
| 43.942 | 0.3270 | 2700 | 10.9695 |
| 43.8534 | 0.3451 | 2850 | 10.9687 |
| 43.9522 | 0.3633 | 3000 | 10.9681 |
| 43.9026 | 0.3814 | 3150 | 10.9676 |
| 43.9441 | 0.3996 | 3300 | 10.9674 |
| 43.9231 | 0.4178 | 3450 | 10.9669 |
| 43.9591 | 0.4359 | 3600 | 10.9668 |
| 43.9474 | 0.4541 | 3750 | 10.9661 |
| 43.872 | 0.4723 | 3900 | 10.9665 |
| 43.9201 | 0.4904 | 4050 | 10.9655 |
| 43.9196 | 0.5086 | 4200 | 10.9655 |
| 43.8986 | 0.5268 | 4350 | 10.9653 |
| 43.9105 | 0.5449 | 4500 | 10.9646 |
| 43.891 | 0.5631 | 4650 | 10.9645 |
| 43.8892 | 0.5813 | 4800 | 10.9643 |
| 43.9004 | 0.5994 | 4950 | 10.9642 |
| 43.8778 | 0.6176 | 5100 | 10.9639 |
| 43.8786 | 0.6357 | 5250 | 10.9636 |
| 43.8783 | 0.6539 | 5400 | 10.9639 |
| 43.9203 | 0.6721 | 5550 | 10.9635 |
| 43.911 | 0.6902 | 5700 | 10.9633 |
| 43.8945 | 0.7084 | 5850 | 10.9631 |
| 43.7851 | 0.7266 | 6000 | 10.9632 |
| 43.8642 | 0.7447 | 6150 | 10.9630 |
| 43.9262 | 0.7629 | 6300 | 10.9628 |
| 43.941 | 0.7811 | 6450 | 10.9627 |
| 43.9075 | 0.7992 | 6600 | 10.9627 |
| 43.8584 | 0.8174 | 6750 | 10.9627 |
| 43.9384 | 0.8356 | 6900 | 10.9627 |
| 43.914 | 0.8537 | 7050 | 10.9626 |
| 43.9203 | 0.8719 | 7200 | 10.9625 |
| 43.9275 | 0.8900 | 7350 | 10.9625 |
| 43.9335 | 0.9082 | 7500 | 10.9624 |
| 43.8368 | 0.9264 | 7650 | 10.9624 |
| 43.8976 | 0.9445 | 7800 | 10.9624 |
| 43.9125 | 0.9627 | 7950 | 10.9625 |
| 43.9119 | 0.9809 | 8100 | 10.9624 |
| 43.9198 | 0.9990 | 8250 | 10.9624 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
emii19/proyecto
|
emii19
| 2025-03-02T21:09:28Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-02T21:09:27Z |
---
license: apache-2.0
---
|
b13nb3n/solid_snake_12
|
b13nb3n
| 2025-03-02T21:01:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-03-02T17:49:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
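In the absence of official instructions, a minimal sketch of loading a `llama`-architecture text-generation checkpoint like this with 🤗 transformers follows; the model ID comes from this repository, while the device settings, prompt, and generation parameters are illustrative assumptions, not the authors' recommendations.

```py
# Minimal sketch, assuming a standard causal-LM checkpoint on the Hub;
# generation settings below are illustrative, not from the model authors.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="b13nb3n/solid_snake_12",
    device_map="auto",
)
print(generator("Hello!", max_new_tokens=50)[0]["generated_text"])
```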
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
codermert/ozgeemm_fluxxx
|
codermert
| 2025-03-02T20:59:50Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-03-02T20:12:07Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Ozgeemm_Fluxxx
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model that this LoRA was trained against
pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16
).to('cuda')
# Attach this repository's LoRA weights
pipeline.load_lora_weights('codermert/ozgeemm_fluxxx', weight_name='lora.safetensors')
# Include the trigger word TOK in the prompt
image = pipeline('TOK, your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
nktntp/llama-3-1-8b-with-adapters
|
nktntp
| 2025-03-02T20:59:43Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-02T15:41:52Z |
---
license: apache-2.0
---
|