| modelId<br>string (5 to 139 chars) | author<br>string (2 to 42 chars) | last_modified<br>timestamp[us, tz=UTC] (2020-02-15 11:33:14 to 2025-06-25 12:29:04) | downloads<br>int64 (0 to 223M) | likes<br>int64 (0 to 11.7k) | library_name<br>string (495 distinct values) | tags<br>sequence (1 to 4.05k items) | pipeline_tag<br>string (54 distinct values) | createdAt<br>timestamp[us, tz=UTC] (2022-03-02 23:29:04 to 2025-06-25 12:27:57) | card<br>string (11 chars to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
c707851/4comic | c707851 | 2025-03-04T14:30:38Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-04T13:55:27Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'no'
  output:
    url: images/123.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
license: other
license_name: myself
license_link: LICENSE
---
# 4comic
<Gallery />
## Model description
for myself
## Download model
Weights for this model are available in Safetensors format.
[Download](/c707851/4comic/tree/main) them in the Files & versions tab.
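A minimal loading sketch with 🧨 diffusers, assuming the adapter can be fetched straight from this repo (the prompt and dtype are illustrative, not from the card):
```python
# Hedged sketch, not official usage code for this LoRA.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
# If auto-detection fails, pass weight_name="<file>.safetensors" from the repo.
pipeline.load_lora_weights("c707851/4comic")
image = pipeline("a four-panel comic strip").images[0]  # illustrative prompt
image.save("4comic_sample.png")
```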
|
abhishekkuber/dccl_correct | abhishekkuber | 2025-03-04T14:28:57Z | 25 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-02-28T13:27:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
A faithful implementation of Domain Confused Contrastive Learning (DCCL) as described in the paper. English: 0.74 (validation), 0.73 (test); Dutch: 0.70 (0.92 recall).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
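A minimal sketch, assuming the checkpoint carries a sequence-classification head on the XLM-R backbone (the label mapping follows the repo's `config.json`):
```python
# Hedged sketch, not from the card: assumes a sequence-classification head.
# If the repo only ships the backbone, transformers will warn that the head
# is randomly initialized.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("abhishekkuber/dccl_correct")
model = AutoModelForSequenceClassification.from_pretrained("abhishekkuber/dccl_correct")

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # label order follows the repo's id2label mapping
```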
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jamesgoncha/fuckorestis5 | jamesgoncha | 2025-03-04T14:28:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T14:06:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iamzafran/qwen-r1-3B-countdown | iamzafran | 2025-03-04T14:28:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-1.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T14:28:32Z | ---
base_model: Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
model_name: qwen-r1-3B-countdown
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for qwen-r1-3B-countdown
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="iamzafran/qwen-r1-3B-countdown", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
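For orientation, a minimal GRPO loop with TRL looks roughly like the following; the dataset and reward function are illustrative placeholders, not the actual countdown setup:
```python
# Minimal GRPO sketch with TRL; dataset and reward are placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward favoring ~20-char completions; a real countdown reward would
    # parse the arithmetic and score correctness and format instead.
    return [-abs(20 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-1.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="qwen-r1-grpo"),
    train_dataset=dataset,
)
trainer.train()
```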
### Framework versions
- TRL: 0.14.0
- Transformers: 4.46.2
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
quanda-bench-test/0921427-default_ClassDetection | quanda-bench-test | 2025-03-04T14:27:08Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-03-04T12:33:16Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Juh6973/t5-small-summarizer-cnn-dailymail | Juh6973 | 2025-03-04T14:26:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-04T14:26:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
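A minimal sketch, assuming (from the repo name) a t5-small checkpoint fine-tuned for CNN/DailyMail summarization:
```python
# Hedged sketch, inferred from the repo name rather than the card itself.
from transformers import pipeline

summarizer = pipeline("summarization", model="Juh6973/t5-small-summarizer-cnn-dailymail")
article = "Your long news article text goes here ..."
# T5 checkpoints are often trained with a "summarize: " prefix; prepend it
# if outputs look off.
print(summarizer(article, max_length=64, min_length=16)[0]["summary_text"])
```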
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
robiulawaldev/8120a111-87b1-4f69-9c93-386b26868e6a | robiulawaldev | 2025-03-04T14:23:50Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-03-04T14:23:33Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: oopsung/llama2-7b-n-ox-test-v1
model-index:
- name: robiulawaldev/8120a111-87b1-4f69-9c93-386b26868e6a
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robiulawaldev/8120a111-87b1-4f69-9c93-386b26868e6a
This model was trained on an unspecified dataset (the Trainer did not record one).
It achieves the following results on the evaluation set:
- Loss: 0.9524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
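A minimal loading sketch, assuming the adapter attaches to the base model named in the card metadata:
```python
# Hedged sketch, not from the card: attach this PEFT adapter to its base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("oopsung/llama2-7b-n-ox-test-v1")
model = PeftModel.from_pretrained(base, "robiulawaldev/8120a111-87b1-4f69-9c93-386b26868e6a")
tokenizer = AutoTokenizer.from_pretrained("oopsung/llama2-7b-n-ox-test-v1")
```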
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zisisbatzos/3SFTs_llama3.2-3B | zisisbatzos | 2025-03-04T14:23:08Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T14:18:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
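A minimal sketch, assuming standard text-generation usage as indicated by the repo tags:
```python
# Hedged sketch, not from the card: standard text-generation pipeline usage.
from transformers import pipeline

generator = pipeline("text-generation", model="zisisbatzos/3SFTs_llama3.2-3B", device_map="auto")
print(generator("Hello, how are you?", max_new_tokens=64)[0]["generated_text"])
```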
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
irishprancer/9a302f86-7c58-47dd-9ddc-9d5784c8b9fc | irishprancer | 2025-03-04T14:20:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T10:50:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hinablue/SDXL_WAILL | hinablue | 2025-03-04T14:20:31Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-04T05:47:08Z | ---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/fdpl-1.0/
---
# Model Card
Merge with waiNSFWIllustrious_v110 for testing.
## Model Details
[waiNSFWIllustrious_v110](https://civitai.com/models/827184/wai-nsfw-illustrious-sdxl)
### Model Description
```
waill 0.6 + wai 0.4 => merged
merged 0.6 + 0.4(0.5(waill 0.6 + wai 0.4, cosine A + cosine B)) => merged_plus_cosineAB
```
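A minimal sketch of the plain weighted-merge step above, assuming safetensors checkpoints; the filenames are placeholders, and the cosine-A/cosine-B interpolation is specific to the merging tool, so only the key-wise weighted sum is shown:
```python
# Hedged sketch of "waill 0.6 + wai 0.4": a key-wise weighted sum of state dicts.
from safetensors.torch import load_file, save_file

def weighted_merge(sd_a, sd_b, alpha=0.6):
    # alpha * A + (1 - alpha) * B for every tensor key shared by both checkpoints
    return {k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k] for k in sd_a if k in sd_b}

waill = load_file("waill.safetensors")  # placeholder filename
wai = load_file("wai.safetensors")      # placeholder filename
save_file(weighted_merge(waill, wai, alpha=0.6), "merged.safetensors")
```
|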
Thiraput01/PhayatunedBERT-v5-finetuned | Thiraput01 | 2025-03-04T14:19:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T13:51:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
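A minimal sketch, assuming standard text-classification usage as indicated by the repo tags:
```python
# Hedged sketch, not from the card: standard text-classification pipeline usage.
from transformers import pipeline

classifier = pipeline("text-classification", model="Thiraput01/PhayatunedBERT-v5-finetuned")
print(classifier("An example sentence to classify."))
```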
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Malpx/Discordbot-DeepSeek-R1-Distill-Llama-8B | Malpx | 2025-03-04T14:18:40Z | 29 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"deepseek",
"unsloth",
"llama-3",
"meta",
"conversational",
"en",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:finetune:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-03T18:34:20Z | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
language:
- en
license: llama3.1
library_name: transformers
tags:
- deepseek
- unsloth
- transformers
- llama
- llama-3
- meta
---
## ***See [our collection](https://huggingface.co/collections/unsloth/deepseek-r1-all-versions-678e1c48f5d2fce87892ace5) for versions of Deepseek-R1 including GGUF and original formats.***
# Finetune LLMs 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.1 (8B) here: https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/Discord%20button.png" width="200"/>](https://discord.gg/unsloth)
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
## ✨ Finetune for Free
All notebooks are **beginner friendly**! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, vLLM or uploaded to Hugging Face.
| Unsloth supports | Free Notebooks | Performance | Memory use |
|-----------------|--------------------------------------------------------------------------------------------------------------------------|-------------|----------|
| **Llama-3.2 (3B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) | 2.4x faster | 58% less |
| **Llama-3.2 (11B vision)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(11B)-Vision.ipynb) | 2x faster | 60% less |
| **Qwen2 VL (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2_VL_(7B)-Vision.ipynb) | 1.8x faster | 60% less |
| **Qwen2.5 (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Qwen2.5_(7B)-Alpaca.ipynb) | 2x faster | 60% less |
| **Llama-3.1 (8B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.1_(8B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Phi-3.5 (mini)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Phi_3.5_Mini-Conversational.ipynb) | 2x faster | 50% less |
| **Gemma 2 (9B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Gemma2_(9B)-Alpaca.ipynb) | 2.4x faster | 58% less |
| **Mistral (7B)** | [▶️ Start on Colab](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_v0.3_(7B)-Conversational.ipynb) | 2.2x faster | 62% less |
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/refs/heads/main/images/documentation%20green%20button.png" width="200"/>](https://docs.unsloth.ai)
- This [Llama 3.2 conversational notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Llama3.2_(1B_and_3B)-Conversational.ipynb) is useful for ShareGPT ChatML / Vicuna templates.
- This [text completion notebook](https://colab.research.google.com/github/unslothai/notebooks/blob/main/nb/Mistral_(7B)-Text_Completion.ipynb) is for raw text. This [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- \* Kaggle has 2x T4s, but we use only one; due to multi-GPU overhead, a single T4 is up to 5x faster.
## Special Thanks
A huge thank you to the DeepSeek team for creating and releasing these models.
## 1. Introduction
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1.
DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrated remarkable performance on reasoning.
With RL, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors.
However, DeepSeek-R1-Zero encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance,
we introduce DeepSeek-R1, which incorporates cold-start data before RL.
DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks.
To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.
<p align="center">
<img width="80%" src="figures/benchmark.jpg">
</p>
## 2. Model Summary
---
**Post-Training: Large-Scale Reinforcement Learning on the Base Model**
- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages aimed at discovering improved reasoning patterns and aligning with human preferences, as well as two SFT stages that serve as the seed for the model's reasoning and non-reasoning capabilities.
We believe the pipeline will benefit the industry by creating better models.
---
**Distillation: Smaller Models Can Be Powerful Too**
- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, resulting in better performance compared to the reasoning patterns discovered through RL on small models. The open source DeepSeek-R1, as well as its API, will benefit the research community to distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on Qwen2.5 and Llama3 series to the community.
## 3. Model Downloads
### DeepSeek-R1 Models
<div align="center">
| **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
| :------------: | :------------: | :------------: | :------------: | :------------: |
| DeepSeek-R1-Zero | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero) |
| DeepSeek-R1 | 671B | 37B | 128K | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1) |
</div>
DeepSeek-R1-Zero & DeepSeek-R1 are trained based on DeepSeek-V3-Base.
For more details regarding the model architecture, please refer to the [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repository.
### DeepSeek-R1-Distill Models
<div align="center">
| **Model** | **Base Model** | **Download** |
| :------------: | :------------: | :------------: |
| DeepSeek-R1-Distill-Qwen-1.5B | [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) |
| DeepSeek-R1-Distill-Qwen-7B | [Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) |
| DeepSeek-R1-Distill-Llama-8B | [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) |
| DeepSeek-R1-Distill-Qwen-14B | [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) |
|DeepSeek-R1-Distill-Qwen-32B | [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) |
| DeepSeek-R1-Distill-Llama-70B | [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct) | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B) |
</div>
DeepSeek-R1-Distill models are fine-tuned based on open-source models, using samples generated by DeepSeek-R1.
We slightly changed their configs and tokenizers. Please use our settings to run these models.
## 4. Evaluation Results
### DeepSeek-R1-Evaluation
For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of $0.6$, a top-p value of $0.95$, and generate 64 responses per query to estimate pass@1.
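Concretely, this pass@1 is the mean correctness over the $k = 64$ sampled responses,

$$\text{pass@1} = \frac{1}{k} \sum_{i=1}^{k} p_i, \qquad p_i \in \{0, 1\},$$

where $p_i$ indicates whether the $i$-th sampled response is correct.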
<div align="center">
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o 0513 | DeepSeek V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek R1 |
|----------|-------------------|----------------------|------------|--------------|----------------|------------|--------------|
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | **91.8** | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | **92.9** |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | **84.0** |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | **92.2** |
| | IF-Eval (Prompt Strict) | **86.5** | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | **75.7** | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | **47.0** | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | **82.5** |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | **87.6** |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | **92.3** |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | **65.9** |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | **96.6** | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | **2061** | 2029 |
| | SWE Verified (Resolved) | **50.8** | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | **61.7** | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | **79.8** |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | **97.3** |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | **78.8** |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | **92.8** |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | **91.8** |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | **68.0** | 40.3 | - | 63.7 |
</div>
### Distilled Model Evaluation
<div align="center">
| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
|------------------------------------------|------------------|-------------------|-----------------|----------------------|----------------------|-------------------|
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | **1820** |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | **72.6** | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | **86.7** | **94.5** | **65.2** | **57.5** | 1633 |
</div>
## 5. Chat Website & API Platform
You can chat with DeepSeek-R1 on DeepSeek's official website, [chat.deepseek.com](https://chat.deepseek.com), by switching on the "DeepThink" button.
We also provide OpenAI-Compatible API at DeepSeek Platform: [platform.deepseek.com](https://platform.deepseek.com/)
## 6. How to Run Locally
### DeepSeek-R1 Models
Please visit [DeepSeek-V3](https://github.com/deepseek-ai/DeepSeek-V3) repo for more information about running DeepSeek-R1 locally.
### DeepSeek-R1-Distill Models
DeepSeek-R1-Distill models can be utilized in the same manner as Qwen or Llama models.
For instance, you can easily start a service using [vLLM](https://github.com/vllm-project/vllm):
```shell
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```
**NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models, otherwise you may encounter issues with endless repetition or incoherent output.**
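Once the server is running, any OpenAI-compatible client can query it; a minimal sketch using the default local endpoint and the recommended sampling settings:
```python
# Hedged client sketch: vLLM serves an OpenAI-compatible API at /v1 by default.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 17 * 24?"}],
    temperature=0.6,  # within the recommended 0.5-0.7 range
    top_p=0.95,
)
print(resp.choices[0].message.content)
```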
## 7. License
This code repository and the model weights are licensed under the [MIT License](https://github.com/deepseek-ai/DeepSeek-R1/blob/main/LICENSE).
The DeepSeek-R1 series supports commercial use and allows modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:
- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B and DeepSeek-R1-Distill-Qwen-32B are derived from [Qwen-2.5 series](https://github.com/QwenLM/Qwen2.5), which are originally licensed under [Apache 2.0 License](https://huggingface.co/Qwen/Qwen2.5-1.5B/blob/main/LICENSE), and now finetuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under [llama3.1 license](https://huggingface.co/meta-llama/Llama-3.1-8B/blob/main/LICENSE).
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under [llama3.3 license](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct/blob/main/LICENSE).
## 8. Citation
```
```
## 9. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
baby-dev/40c5941f-7b67-40fe-a7bf-d81788a1caa6 | baby-dev | 2025-03-04T14:18:17Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-03-04T14:18:00Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: oopsung/llama2-7b-n-ox-test-v1
model-index:
- name: baby-dev/40c5941f-7b67-40fe-a7bf-d81788a1caa6
  results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/40c5941f-7b67-40fe-a7bf-d81788a1caa6
This model was trained on an unspecified dataset (the Trainer did not record one).
It achieves the following results on the evaluation set:
- Loss: 0.9530
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p6 | silviasapora | 2025-03-04T14:17:03Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T11:38:01Z | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p6", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/vzr457aw)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
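For orientation, a minimal ORPO run with TRL looks roughly like the following; hyperparameters are illustrative, not the actual training configuration:
```python
# Minimal ORPO sketch with TRL; values are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
dataset = load_dataset("argilla/dpo-mix-7k", split="train")  # chosen/rejected pairs

trainer = ORPOTrainer(
    model=model,
    args=ORPOConfig(output_dir="gemma-7b-orpo", beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```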
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Oldy2008/Alice-Hub | Oldy2008 | 2025-03-04T14:17:00Z | 0 | 0 | null | [
"code",
"license:apache-2.0",
"region:us"
] | null | 2025-03-04T08:57:10Z | ---
license: apache-2.0
tags:
- code
--- |
silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p7 | silviasapora | 2025-03-04T14:17:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T11:38:01Z | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p7", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/byqulp8k)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jmalejandrob79/cndnlsldd | jmalejandrob79 | 2025-03-04T14:16:26Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2024-09-22T20:31:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: cndnlsldd
---
# Cndnlsldd
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `cndnlsldd` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/cndnlsldd', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p4 | silviasapora | 2025-03-04T14:16:19Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"orpo",
"conversational",
"dataset:argilla/dpo-mix-7k",
"arxiv:2403.07691",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T11:38:01Z | ---
base_model: google/gemma-7b
datasets:
- argilla/dpo-mix-7k
library_name: transformers
model_name: google/gemma-7b
tags:
- generated_from_trainer
- alignment-handbook
- trl
- orpo
licence: license
---
# Model Card for google/gemma-7b
This model is a fine-tuned version of [google/gemma-7b](https://huggingface.co/google/gemma-7b) on the [argilla/dpo-mix-7k](https://huggingface.co/datasets/argilla/dpo-mix-7k) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="silviasapora/gemma-7b-silvia-basic-5e-5-05-vsh2p4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/silvias/huggingface/runs/j4uxb5dk)
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.13.0
- Transformers: 4.48.1
- Pytorch: 2.5.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
pandybeii/flat-wD | pandybeii | 2025-03-04T14:15:44Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-04T02:19:59Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: LICENSE
---
|
Dahonk/Test | Dahonk | 2025-03-04T14:07:22Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-04T14:07:22Z | ---
license: apache-2.0
---
|
ReadyArt/Qwen2.5-1.5B-Instruct_EXL2_5.0bpw_H8 | ReadyArt | 2025-03-04T14:06:41Z | 0 | 0 | transformers | [
"transformers",
"qwen2",
"text-generation",
"chat",
"conversational",
"en",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:quantized:Qwen/Qwen2.5-1.5B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"5-bit",
"exl2",
"region:us"
] | text-generation | 2025-03-04T14:05:55Z | ---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-1.5B
tags:
- chat
library_name: transformers
---
# Qwen2.5-1.5B-Instruct
## Introduction
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation of up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 1.5B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 1.54B
- Number of Parameters (Non-Embedding): 1.31B
- Number of Layers: 28
- Number of Attention Heads (GQA): 12 for Q and 2 for KV
- Context Length: 32,768 tokens (full), with generation of up to 8,192 tokens
For more details, please refer to our [blog](https://qwenlm.github.io/blog/qwen2.5/), [GitHub](https://github.com/QwenLM/Qwen2.5), and [Documentation](https://qwen.readthedocs.io/en/latest/).
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
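As a quick sanity check, you can verify the installed version before loading the model; a minimal sketch, with the version bound taken from the note above:

```python
# Minimal version check for the qwen2 architecture support noted above.
import transformers
from packaging import version  # packaging ships as a transformers dependency

if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old for Qwen2.5; upgrade to >= 4.37.0"
    )
```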
## Quickstart
The following code snippet uses `apply_chat_template` to show how to load the tokenizer and model and generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-1.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
## Evaluation & Performance
Detailed evaluation results are reported in this [📑 blog](https://qwenlm.github.io/blog/qwen2.5/).
For requirements on GPU memory and the respective throughput, see results [here](https://qwen.readthedocs.io/en/latest/benchmark/speed_benchmark.html).
## Citation
If you find our work helpful, feel free to cite our work.
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
``` |
irishprancer/3a211065-d806-4534-80ac-8f92e5b7e6cf | irishprancer | 2025-03-04T14:06:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T11:43:27Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
texanrangee/560b1706-54b0-4216-b590-1f85bfd776e9 | texanrangee | 2025-03-04T14:06:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T13:12:12Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Alphatao/dc7401e9-f365-4f04-9a04-574ab402fcd6 | Alphatao | 2025-03-04T13:56:41Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.6",
"license:apache-2.0",
"region:us"
] | null | 2025-03-04T13:11:50Z | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
tags:
- axolotl
- generated_from_trainer
model-index:
- name: dc7401e9-f365-4f04-9a04-574ab402fcd6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 580e2ed7cf1a385c_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/580e2ed7cf1a385c_train_data.json
type:
field_input: context
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
device_map:
? ''
: 0,1,2,4
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 33
eval_table_size: null
flash_attention: true
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: false
hub_model_id: Alphatao/dc7401e9-f365-4f04-9a04-574ab402fcd6
hub_repo: null
hub_strategy: null
hub_token: null
learning_rate: 0.0002
load_best_model_at_end: true
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.3
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 1200.0
micro_batch_size: 4
mlflow_experiment_name: /tmp/580e2ed7cf1a385c_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 33
sequence_len: 2048
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.04
wandb_entity: null
wandb_mode: online
wandb_name: cd6f5a26-96e9-4de4-b508-b4e9fb72732f
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: cd6f5a26-96e9-4de4-b508-b4e9fb72732f
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# dc7401e9-f365-4f04-9a04-574ab402fcd6
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the dataset configured above (`580e2ed7cf1a385c_train_data.json`).
It achieves the following results on the evaluation set:
- Loss: 0.1677
## Model description
More information needed
## Intended uses & limitations
More information needed
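A minimal inference-loading sketch for this LoRA adapter is shown below; it is a hedged example, not part of the original card, and generation settings are not specified here:

```python
# Attach the trained LoRA adapter to its TinyLlama base model.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
model = PeftModel.from_pretrained(base, "Alphatao/dc7401e9-f365-4f04-9a04-574ab402fcd6")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v0.6")
```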
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- total_eval_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 571
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9155 | 0.0035 | 1 | 0.9365 |
| 0.3103 | 0.1156 | 33 | 0.3041 |
| 0.2606 | 0.2312 | 66 | 0.2477 |
| 0.2239 | 0.3468 | 99 | 0.2264 |
| 0.2124 | 0.4623 | 132 | 0.2110 |
| 0.1808 | 0.5779 | 165 | 0.2038 |
| 0.2123 | 0.6935 | 198 | 0.1944 |
| 0.1903 | 0.8091 | 231 | 0.1890 |
| 0.2142 | 0.9247 | 264 | 0.1861 |
| 0.1475 | 1.0403 | 297 | 0.1806 |
| 0.1483 | 1.1559 | 330 | 0.1786 |
| 0.1769 | 1.2715 | 363 | 0.1746 |
| 0.1627 | 1.3870 | 396 | 0.1733 |
| 0.1687 | 1.5026 | 429 | 0.1707 |
| 0.1749 | 1.6182 | 462 | 0.1692 |
| 0.1529 | 1.7338 | 495 | 0.1685 |
| 0.1725 | 1.8494 | 528 | 0.1679 |
| 0.1558 | 1.9650 | 561 | 0.1677 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
baby-dev/db701031-4801-41fe-98f5-9a99123c9355 | baby-dev | 2025-03-04T13:55:30Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:oopsung/llama2-7b-n-ox-test-v1",
"base_model:adapter:oopsung/llama2-7b-n-ox-test-v1",
"region:us"
] | null | 2025-03-04T13:55:16Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: oopsung/llama2-7b-n-ox-test-v1
model-index:
- name: baby-dev/db701031-4801-41fe-98f5-9a99123c9355
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# baby-dev/db701031-4801-41fe-98f5-9a99123c9355
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9525
## Model description
More information needed
## Intended uses & limitations
More information needed
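A minimal loading sketch for this adapter, using the base model listed in the card metadata (a hedged example; generation settings are not specified on this card):

```python
# Load the LoRA adapter on its listed base model (oopsung/llama2-7b-n-ox-test-v1).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("oopsung/llama2-7b-n-ox-test-v1")
model = PeftModel.from_pretrained(base, "baby-dev/db701031-4801-41fe-98f5-9a99123c9355")
tokenizer = AutoTokenizer.from_pretrained("oopsung/llama2-7b-n-ox-test-v1")
```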
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MiniEtagka/tolstovkadenis | MiniEtagka | 2025-03-04T13:55:17Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-04T13:36:47Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: tolstovkadenis
---
# Tolstovkadenis
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `tolstovkadenis` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('MiniEtagka/tolstovkadenis', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
LucaZilli/arctic-s-phrases-only-v0 | LucaZilli | 2025-03-04T13:54:27Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:48157",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:Snowflake/snowflake-arctic-embed-s",
"base_model:finetune:Snowflake/snowflake-arctic-embed-s",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-04T13:54:17Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:48157
- loss:CosineSimilarityLoss
base_model: Snowflake/snowflake-arctic-embed-s
widget:
- source_sentence: Fornitori di materiali di alta qualità per produzione manifatturiera
sentences:
- FRIUL FILIERE SPA ||~~|| FRIUL FILIERE SPA è specializzata in linee di estrusione
e attrezzature per la produzione di profili, tubi e vari prodotti avanzati in
polimero. biocompositi
- FRER SRL ||~~|| FRER SRL si specializza in strumenti di misura elettrica ad alta
affidabilità, tra cui analizzatori di rete multifunzione, trasformatori di corrente
e moduli di monitoraggio. analizzatore di rete trifase, isolato, con lettura thd
- MONT.EL APPARECCHIATURE ELETTROELETTRONICHE SRL ||~~|| MONT.EL APPARECCHIATURE
ELETTROELETTRONICHE SRL specializes in custom electrical wiring and electronic
equipment for various industrial applications. temperature probe
- source_sentence: come strutturare listino prezzi multilivello
sentences:
- GIOVE SRL ||~~|| GIOVE SRL offre consulenza per le imprese, concentrandosi su
incentivi finanziari, produzione sostenibile, valutazione dei brevetti e supporto
all'internazionalizzazione. finanza agevolata
- SELEZIONE MANIERI SAS DI MARCO MANIERI ||~~|| SELEZIONE MANIERI SAS offers comprehensive
e-commerce solutions, including legal information, payment methods, dropshipping,
and responsive design for online businesses. price comparators
- SWEET LEGAL TECH SRL STARTUP COSTITUITA A NORMA DELL'ART. 4 COMMA 10 BIS DEL DECRETO
LEGGE 24 GENNAIO 2015 N. 3 ||~~|| SWEET LEGAL TECH SRL offers legal tech consulting
and software solutions for AI, contract management, digital compliance, and legal
tech indexing. contract management (software)
- source_sentence: traduttori technici cinese
sentences:
- noleggio-stampanti.com | noleggio multifunzione Udine | noleggio stampanti Udine
||~~|| Noleggio-stampanti.com offers high-quality rental services for printers,
multifunction devices, and photocopiers, focusing on cost savings and efficient
print management. large format plotter
- ISFCERT SRL ||~~|| ISFCERT SRL è un organismo di certificazione che offre la certificazione
ISO 25639:2008 e certificati di audit secondo lo standard UFI, accreditato da
Accredia. certification body
- SVETLANA MIRONOVA ||~~|| SVETLANA MIRONOVA offre consulenza commerciale, interpretariato,
assistenza, ricerca, selezione, traduzione professionale e prodotti specializzati
come i sistemi di mungitura TDM e le morsettiere Euro. traduzione professionale
- source_sentence: Indicami delle aziende che fresano polietilene ad alta intensità.
mi serve per realizzare delle portate/tamponi per stampare lo scafo di una barca
sentences:
- PASTA FRESCA FRANZI DI CERRI CORRADO & IANI OMBRETTA SNC ||~~|| PASTA FRESCA FRANZI
si specializza in pasta fresca artigianale di alta qualità, comprese varietà con
uova e senza uova, pasta ripiena e salse. pappardelle
- CONFIDICOOP MARCHE SOC COOP ||~~|| CONFIDICOOP MARCHE SOC COOP offre servizi finanziari
per le imprese, tra cui credito rapido, garanzie bancarie e consulenza per l'accesso
al finanziamento. credito veloce
- EFFEDUE S.R.L. ||~~|| EFFEDUE S.R.L. è specializzata in materiali plastici di
alta qualità e servizi di lavorazione CNC personalizzati per varie applicazioni
industriali. polietilene
- source_sentence: CDMO
sentences:
- C.M.L. SNC DI ZANETTI GIOVANNI & C. ||~~|| C.M.L. SNC specializes in precision
mechanical machining, offering a range of mechanical processing and various types
of machines and systems. mechanical processing
- VTEX ECOMMERCE PLATFORM LIMITED ||~~|| VTEX ECOMMERCE PLATFORM LIMITED offre una
piattaforma di commercio completa basata su cloud per B2B e B2C, con soluzioni
per la gestione degli ordini e marketplace. commercio b2b
- Insight Consulting - Siti web e Digital Marketing ||~~|| Insight Consulting specializes
in digital strategy, enhancing customer engagement, brand awareness, and lead
acquisition through tailored omni-channel solutions and market analysis. lead
acquisition
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-s
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-s](https://huggingface.co/Snowflake/snowflake-arctic-embed-s) <!-- at revision e596f507467533e48a2e17c007f0e1dacc837b33 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("LucaZilli/arctic-s-phrases-only-v0")
# Run inference
sentences = [
'CDMO',
'C.M.L. SNC DI ZANETTI GIOVANNI & C. ||~~|| C.M.L. SNC specializes in precision mechanical machining, offering a range of mechanical processing and various types of machines and systems. mechanical processing',
'Insight Consulting - Siti web e Digital Marketing ||~~|| Insight Consulting specializes in digital strategy, enhancing customer engagement, brand awareness, and lead acquisition through tailored omni-channel solutions and market analysis. lead acquisition',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 48,157 training samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:---------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 14.6 tokens</li><li>max: 46 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 25.64 tokens</li><li>max: 133 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:----------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>pavimentazione industriale antiscivolo certificata</code> | <code>MOLTA SRL ||~~|| MOLTA SRL si specializza in soluzioni di pavimentazione e sottofondo a base di cemento di alta qualità per edifici civili e industriali, garantendo prestazioni superiori e durata. trattamenti superficiali antiolio per pavimenti industriali</code> | <code>0.4</code> |
| <code>monitor arm for dual screens</code> | <code>braccio per monitor</code> | <code>0.6</code> |
| <code>investigatore privato dipendenti</code> | <code>Investigatore Privato ||~~|| Investigatore Privato offre servizi investigativi completi, inclusi indagini private, aziendali e forensi, con un focus su questioni legali, finanziarie e personali. affidamento figli minori</code> | <code>0.6</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
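For context, `CosineSimilarityLoss` regresses the cosine similarity of each sentence pair onto the gold score with the MSE objective above. A minimal fine-tuning sketch follows; the sample pair is taken from the training data shown earlier, while the batch size and warmup steps are illustrative assumptions:

```python
# Minimal CosineSimilarityLoss fine-tuning sketch -- hyperparameters are illustrative.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-s")
train_examples = [
    InputExample(texts=["monitor arm for dual screens", "braccio per monitor"], label=0.6),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=32)
train_loss = losses.CosineSimilarityLoss(model)  # pairs scored against cosine similarity via MSE

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
```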
### Evaluation Dataset
#### Unnamed Dataset
* Size: 2,352 evaluation samples
* Columns: <code>sentence1</code>, <code>sentence2</code>, and <code>score</code>
* Approximate statistics based on the first 1000 samples:
| | sentence1 | sentence2 | score |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 4 tokens</li><li>mean: 17.31 tokens</li><li>max: 48 tokens</li></ul> | <ul><li>min: 13 tokens</li><li>mean: 71.79 tokens</li><li>max: 122 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.62</li><li>max: 1.0</li></ul> |
* Samples:
| sentence1 | sentence2 | score |
|:-------------------------------------------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>CDMO</code> | <code>C.M.L. SNC DI ZANETTI GIOVANNI & C. ||~~|| C.M.L. SNC specializes in precision mechanical machining, offering a range of mechanical processing and various types of machines and systems. mechanical processing</code> | <code>0.4</code> |
| <code>programmatori salesforce</code> | <code>EFFEGIT SRL ||~~|| EFFEGIT SRL è una software house specializzata nello sviluppo web, che offre competenze in C#, Java, Swift, .NET e varie piattaforme. salesforce (piattaforma)</code> | <code>0.6</code> |
| <code>software con intelligenza artificiale per i contratti</code> | <code>BORRONI VALERIA ||~~|| BORRONI VALERIA offre una piattaforma per oggetti smarriti e trovati, con servizi di geolocalizzazione per animali domestici, gioielli, abbigliamento, trasporti e musica. annunci smarrimento</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 2.0000000000000003e-06
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `load_best_model_at_end`: True
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2.0000000000000003e-06
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0066 | 10 | 0.1549 | - |
| 0.0133 | 20 | 0.1566 | - |
| 0.0199 | 30 | 0.1482 | - |
| 0.0266 | 40 | 0.135 | 0.0904 |
| 0.0332 | 50 | 0.1406 | - |
| 0.0399 | 60 | 0.1186 | - |
| 0.0465 | 70 | 0.1077 | - |
| 0.0532 | 80 | 0.1068 | 0.0746 |
| 0.0598 | 90 | 0.0818 | - |
| 0.0664 | 100 | 0.0841 | - |
| 0.0731 | 110 | 0.0768 | - |
| 0.0797 | 120 | 0.0737 | 0.0721 |
| 0.0864 | 130 | 0.0717 | - |
| 0.0930 | 140 | 0.0632 | - |
| 0.0997 | 150 | 0.0585 | - |
| 0.1063 | 160 | 0.0633 | 0.0757 |
| 0.1130 | 170 | 0.0567 | - |
| 0.1196 | 180 | 0.0633 | - |
| 0.1262 | 190 | 0.0584 | - |
| 0.1329 | 200 | 0.0686 | 0.0744 |
| 0.1395 | 210 | 0.0618 | - |
| 0.1462 | 220 | 0.0585 | - |
| 0.1528 | 230 | 0.0545 | - |
| 0.1595 | 240 | 0.0588 | 0.0721 |
| 0.1661 | 250 | 0.0532 | - |
| 0.1728 | 260 | 0.0608 | - |
| 0.1794 | 270 | 0.054 | - |
| 0.1860 | 280 | 0.059 | 0.0697 |
| 0.1927 | 290 | 0.0513 | - |
| 0.1993 | 300 | 0.0603 | - |
| 0.2060 | 310 | 0.0538 | - |
| 0.2126 | 320 | 0.0565 | 0.0686 |
| 0.2193 | 330 | 0.0515 | - |
| 0.2259 | 340 | 0.0565 | - |
| 0.2326 | 350 | 0.0579 | - |
| 0.2392 | 360 | 0.0504 | 0.0672 |
| 0.2458 | 370 | 0.0529 | - |
| 0.2525 | 380 | 0.0541 | - |
| 0.2591 | 390 | 0.0552 | - |
| 0.2658 | 400 | 0.0556 | 0.0669 |
| 0.2724 | 410 | 0.0561 | - |
| 0.2791 | 420 | 0.0629 | - |
| 0.2857 | 430 | 0.05 | - |
| 0.2924 | 440 | 0.0609 | 0.0659 |
| 0.2990 | 450 | 0.0539 | - |
| 0.3056 | 460 | 0.0556 | - |
| 0.3123 | 470 | 0.0516 | - |
| 0.3189 | 480 | 0.0456 | 0.0651 |
| 0.3256 | 490 | 0.0485 | - |
| 0.3322 | 500 | 0.0504 | - |
| 0.3389 | 510 | 0.0577 | - |
| 0.3455 | 520 | 0.0538 | 0.0647 |
| 0.3522 | 530 | 0.0458 | - |
| 0.3588 | 540 | 0.0496 | - |
| 0.3654 | 550 | 0.0486 | - |
| 0.3721 | 560 | 0.0536 | 0.0645 |
| 0.3787 | 570 | 0.0501 | - |
| 0.3854 | 580 | 0.0519 | - |
| 0.3920 | 590 | 0.0523 | - |
| 0.3987 | 600 | 0.0456 | 0.0639 |
| 0.4053 | 610 | 0.0561 | - |
| 0.4120 | 620 | 0.0534 | - |
| 0.4186 | 630 | 0.0546 | - |
| 0.4252 | 640 | 0.0531 | 0.0637 |
| 0.4319 | 650 | 0.0443 | - |
| 0.4385 | 660 | 0.0522 | - |
| 0.4452 | 670 | 0.0456 | - |
| 0.4518 | 680 | 0.049 | 0.0635 |
| 0.4585 | 690 | 0.0488 | - |
| 0.4651 | 700 | 0.0523 | - |
| 0.4718 | 710 | 0.0487 | - |
| 0.4784 | 720 | 0.0515 | 0.0632 |
| 0.4850 | 730 | 0.0453 | - |
| 0.4917 | 740 | 0.0511 | - |
| 0.4983 | 750 | 0.0429 | - |
| 0.5050 | 760 | 0.0409 | 0.0631 |
| 0.5116 | 770 | 0.0534 | - |
| 0.5183 | 780 | 0.0485 | - |
| 0.5249 | 790 | 0.0527 | - |
| 0.5316 | 800 | 0.0475 | 0.0630 |
| 0.5382 | 810 | 0.0512 | - |
| 0.5449 | 820 | 0.0439 | - |
| 0.5515 | 830 | 0.042 | - |
| 0.5581 | 840 | 0.0499 | 0.0628 |
| 0.5648 | 850 | 0.0431 | - |
| 0.5714 | 860 | 0.0541 | - |
| 0.5781 | 870 | 0.045 | - |
| 0.5847 | 880 | 0.0495 | 0.0627 |
| 0.5914 | 890 | 0.0531 | - |
| 0.5980 | 900 | 0.0478 | - |
| 0.6047 | 910 | 0.0547 | - |
| **0.6113** | **920** | **0.0474** | **0.0626** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.2.2
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MeiKing111/global_26 | MeiKing111 | 2025-03-04T13:54:07Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T04:47:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lityStabi/filemonade1 | lityStabi | 2025-03-04T13:49:56Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-04T13:36:17Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Fïlëmönädë
---
# Filemonade1
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Fïlëmönädë` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('lityStabi/filemonade1', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
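Prompts must contain the trigger word; for example, reusing the `pipeline` object from above (the prompt text itself is illustrative):

```py
image = pipeline('a portrait photo of Fïlëmönädë in a garden').images[0]
image.save('filemonade.png')
```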
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
ISEGURA/mdeberta-v3-base-200-bioautex | ISEGURA | 2025-03-04T13:49:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T13:49:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
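Pending details from the authors, the repository tags (deberta-v2, text-classification) suggest a standard sequence-classification setup, so a minimal sketch would be (the label set and task are not documented here):

```python
from transformers import pipeline

# Minimal sketch, assuming a standard sequence-classification head;
# the labels this model predicts are not documented in this card.
classifier = pipeline("text-classification", model="ISEGURA/mdeberta-v3-base-200-bioautex")
print(classifier("Texto de ejemplo para clasificar."))
```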
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wafureshugen/aiua_llama_3-2_3b_t1 | wafureshugen | 2025-03-04T13:49:33Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T13:03:13Z | ---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** wafureshugen
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
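A minimal inference sketch with 🤗 Transformers (assuming this repository contains merged weights rather than only a LoRA adapter):

```python
from transformers import pipeline

generator = pipeline("text-generation", model="wafureshugen/aiua_llama_3-2_3b_t1", device="cuda")
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```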
|
Gameselo/french-multilingual-e5-large-instruct | Gameselo | 2025-03-04T13:49:13Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"passage-retrieval",
"sentence-similarity",
"pruned",
"fr",
"base_model:intfloat/multilingual-e5-large-instruct",
"base_model:quantized:intfloat/multilingual-e5-large-instruct",
"license:mit",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-04T13:49:02Z |
---
pipeline_tag: sentence-similarity
language: fr
license: mit
tags:
- passage-retrieval
- sentence-similarity
- pruned
library_name: sentence-transformers
base_model: intfloat/multilingual-e5-large-instruct
base_model_relation: quantized
---
# 🇫🇷 french-multilingual-e5-large-instruct
This model is a 38.9% smaller version of [intfloat/multilingual-e5-large-instruct](https://huggingface.co/intfloat/multilingual-e5-large-instruct)
for the French language, created using the [mtem-pruner](https://huggingface.co/spaces/antoinelouis/mtem-pruner) space.
This pruned model should perform similarly to the original model on French-language tasks, with a much smaller
memory footprint. However, it may not perform well for the other languages covered by the original multilingual
model, since tokens that are uncommon in French were removed from its vocabulary.
## Usage
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer
model_name = "Gameselo/french-multilingual-e5-large-instruct"
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, use_fast=True)
```
Or with the sentence-transformers library:
```python
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Gameselo/french-multilingual-e5-large-instruct")
```
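As with the original multilingual-e5-large-instruct, retrieval queries should be prefixed with a task instruction, while documents are encoded as-is. A minimal similarity sketch (the task string and example texts below are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Gameselo/french-multilingual-e5-large-instruct")

task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [f"Instruct: {task}\nQuery: qu'est-ce que l'apprentissage automatique ?"]
documents = ["L'apprentissage automatique est un champ d'étude de l'intelligence artificielle."]

query_emb = model.encode(queries, normalize_embeddings=True)
doc_emb = model.encode(documents, normalize_embeddings=True)
print(query_emb @ doc_emb.T)  # cosine similarity, since embeddings are normalized
```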
**Credits**: cc [@antoinelouis](https://huggingface.co/antoinelouis)
|
Se-Jin/kot5-summarization | Se-Jin | 2025-03-04T13:48:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2025-03-04T02:40:50Z | ---
library_name: transformers
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: kot5-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kot5-summarization
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5538
- Rouge1: 21.7886
- Rouge2: 5.1324
- Rougel: 21.4998
- Rougelsum: 21.4897
## Model description
More information needed
## Intended uses & limitations
More information needed
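In the absence of documentation from the authors, inference presumably follows the standard summarization pipeline (the Korean input below is an illustrative placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Se-Jin/kot5-summarization")
text = "요약할 한국어 기사 본문을 여기에 넣습니다."  # illustrative placeholder input
print(summarizer(text, max_length=64, min_length=8)[0]["summary_text"])
```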
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|
| 2.9193 | 1.0 | 2000 | 1.5958 | 20.4915 | 4.7441 | 20.3127 | 20.2598 |
| 1.9504 | 2.0 | 4000 | 1.5538 | 21.7886 | 5.1324 | 21.4998 | 21.4897 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
kensvin/emotion_classification | kensvin | 2025-03-04T13:46:55Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-09-13T12:02:04Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: emotion_classification
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.60625
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# emotion_classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2024
- Accuracy: 0.6062
## Model description
More information needed
## Intended uses & limitations
More information needed
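Pending details from the author, inference presumably follows the standard image-classification pipeline (the file path below is illustrative):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="kensvin/emotion_classification")
predictions = classifier("face.jpg")  # illustrative path to a face image
print(predictions)
```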
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 10 | 1.3600 | 0.4938 |
| No log | 2.0 | 20 | 1.2908 | 0.4938 |
| No log | 3.0 | 30 | 1.2799 | 0.5 |
| No log | 4.0 | 40 | 1.2110 | 0.5312 |
| No log | 5.0 | 50 | 1.2178 | 0.5188 |
| No log | 6.0 | 60 | 1.2189 | 0.5188 |
| No log | 7.0 | 70 | 1.2566 | 0.5375 |
| No log | 8.0 | 80 | 1.1838 | 0.5687 |
| No log | 9.0 | 90 | 1.2730 | 0.55 |
| No log | 10.0 | 100 | 1.2329 | 0.575 |
| No log | 11.0 | 110 | 1.2224 | 0.5563 |
| No log | 12.0 | 120 | 1.2729 | 0.5563 |
| No log | 13.0 | 130 | 1.2678 | 0.5687 |
| No log | 14.0 | 140 | 1.2423 | 0.5687 |
| No log | 15.0 | 150 | 1.1704 | 0.6312 |
| No log | 16.0 | 160 | 1.2925 | 0.5625 |
| No log | 17.0 | 170 | 1.3557 | 0.5312 |
| No log | 18.0 | 180 | 1.2951 | 0.5687 |
| No log | 19.0 | 190 | 1.2594 | 0.5625 |
| No log | 20.0 | 200 | 1.2463 | 0.5687 |
### Framework versions
- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
|
raheelchandio55/AI-DrivenContenGenerationPlatform | raheelchandio55 | 2025-03-04T13:46:13Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-04T13:46:13Z | ---
license: apache-2.0
---
|
texanrangee/618c4e42-6bc7-4c37-a7de-4494f9fc7577 | texanrangee | 2025-03-04T13:42:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T09:19:00Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kim-12322/deepseek-public-health | kim-12322 | 2025-03-04T13:40:57Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"dataset:sambanankhu/public-health-QA-handouts-instruct-Llama-2k",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:quantized:deepseek-ai/DeepSeek-R1",
"license:mit",
"8-bit",
"gptq",
"region:us"
] | null | 2025-03-04T13:05:04Z | ---
license: mit
datasets:
- sambanankhu/public-health-QA-handouts-instruct-Llama-2k
base_model:
- deepseek-ai/DeepSeek-R1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This model card aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
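As a starting point, the repository tags (qwen2, gptq, 8-bit) suggest the model can be loaded with 🤗 Transformers with GPTQ support installed (e.g., optimum plus a GPTQ kernel package); a sketch, untested:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kim-12322/deepseek-public-health"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("What are the symptoms of malaria?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```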
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
rayonlabs/Qwen2_5-7B-Instruct-orca_mini_uncensored-62308494-ba19-4e1f-8a78-afd21d23a45d | rayonlabs | 2025-03-04T13:39:29Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-7B-Instruct",
"region:us"
] | null | 2025-03-04T13:39:29Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: Qwen/Qwen2.5-7B-Instruct
model-index:
- name: nathanialhunt2000/fdfcbbc4-4258-44f1-971c-17b3cb6d010c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nathanialhunt2000/fdfcbbc4-4258-44f1-971c-17b3cb6d010c
This PEFT adapter for [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) was fine-tuned on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0132
## Model description
More information needed
## Intended uses & limitations
More information needed
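Since this repository contains a PEFT adapter for Qwen/Qwen2.5-7B-Instruct, loading likely follows the standard PEFT pattern (sketch, untested):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "rayonlabs/Qwen2_5-7B-Instruct-orca_mini_uncensored-62308494-ba19-4e1f-8a78-afd21d23a45d"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
```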
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
irishprancer/737b4495-b4d9-490b-8289-8ba6c91a6097 | irishprancer | 2025-03-04T13:35:53Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T10:04:35Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
WbjuSrceu/Qwen2-0.5B-GRPO-test | WbjuSrceu | 2025-03-04T13:31:12Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:WbjuSrceu/jfgfh",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-02-13T08:11:11Z | ---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: WbjuSrceu/jfgfh
library_name: transformers
model_name: Qwen2-0.5B-GRPO-test
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-test
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [WbjuSrceu/jfgfh](https://huggingface.co/datasets/WbjuSrceu/jfgfh) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="WbjuSrceu/Qwen2-0.5B-GRPO-test", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
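For reference, a GRPO run with TRL is typically set up as in the sketch below; the reward function here is a toy placeholder, not the one actually used for this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("WbjuSrceu/jfgfh", split="train")

def reward_len(completions, **kwargs):
    # Toy reward assuming string completions: prefer outputs near 50 characters.
    return [-abs(50 - len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="Qwen2-0.5B-GRPO-test"),
    train_dataset=dataset,
)
trainer.train()
```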
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
EdBerg/glean_Llama-3.2-3B-Instruct_Baha_1 | EdBerg | 2025-03-04T13:31:00Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-3B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-3B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T12:51:19Z | ---
base_model: meta-llama/Llama-3.2-3B-Instruct
library_name: transformers
model_name: glean_Llama-3.2-3B-Instruct_Baha_1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for glean_Llama-3.2-3B-Instruct_Baha_1
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="EdBerg/glean_Llama-3.2-3B-Instruct_Baha_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/harpermia882/huggingface/runs/yhiaavyg)
This model was trained with SFT.
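For reference, an SFT run with TRL typically looks like the sketch below; the dataset shown is a placeholder, as the actual training data is not documented:

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset
trainer = SFTTrainer(
    model="meta-llama/Llama-3.2-3B-Instruct",
    args=SFTConfig(output_dir="glean_Llama-3.2-3B-Instruct_Baha_1"),
    train_dataset=dataset,
)
trainer.train()
```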
### Framework versions
- TRL: 0.12.0
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
tjohn327/scion-multilingual-e5-small | tjohn327 | 2025-03-04T13:30:31Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23040",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:intfloat/multilingual-e5-small",
"base_model:finetune:intfloat/multilingual-e5-small",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2025-03-04T10:46:30Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23040
- loss:MultipleNegativesRankingLoss
base_model: intfloat/multilingual-e5-small
widget:
- source_sentence: 'query: How do ingress and egress interface pairs relate to the
PATH field in SCION packets?'
sentences:
- 'passage:
<citation> Laurent Chuat et al.. *The Complete Guide to SCION. From Design Principles
to Formal Verification*. Springer International Publishing AG, 2022. </citation>
<type> book </type>
<page> 268 </page>
<content>
10 Extensions for the Data Plane
send traffic. However, even using multiple versions simultaneously does not
provide more bandwidth, as all versions are mapped to the same underlying
reservation ID in the probabilistic traffic monitor (see § 10.2.4.11).
The initiator of an EER is not the only entity that could be interested in ad-
justing the reserved bandwidth. An AS on the path may also wish to reduce
an EER’s bandwidth, e.g., if it receives an increasing number of contending
requests. As in the setup procedure, during a renewal request all on-path ASes
can specify the amount of bandwidth they are willing to grant, enabling ASes
to quickly adapt to changes in demand without interrupting service over exist-
ing reservations.
SegRs. Although there is a less frequent need to renew SegRs, due to their
longer lifetimes, COLIBRI’s design ensures that EERs are not affected by a
version change of their underlying SegR. In contrast to EERs, only a single
version of a SegR can exist at any time and a pending version obtained through
a renewal request must be activated explicitly using a separate request. Making
this switch explicit allows ASes to precisely control the time to change to a
new version and ensure that no over-allocation with EERs can occur.
10.2.4.4 Packet Format and Header Fields
COLIBRI is implemented as a separate path type in the SCION architecture.
Abstractly, a COLIBRI packet traversing AS0–ASℓ has the following format:
PACKET = (PATH ∥ RESINFO ∥ EERINFO⁸ ∥ Ts ∥
V0 ∥ ... ∥ Vℓ ∥ Payload),   (10.5a)
PATH = ((In0, Eg0) ∥ ... ∥ (Inℓ, Egℓ)),   (10.5b)
RESINFO = (SrcAS ∥ ResId ∥ Bw ∥ ExpT ∥ Ver),   (10.5c)
EERINFO = (SrcHost ∥ DstHost),   (10.5d)
where Vi denotes the hop validation field (HVF) of ASi, which authenticates
parts of the packet header and will be explained in detail in § 10.2.4.6; PATH
is a list of ingress–egress interface pairs; SrcAS is the source AS; SrcHost
and DstHost are the end-host addresses within the SCION address header; Bw,
ExpT, and Ver denote the reservation bandwidth, expiration time, and version,
respectively; and Ts is a high-precision packet timestamp relative to ExpT and
uniquely identifies the packet for the particular source.
This packet format is used for all COLIBRI control- and data-plane traffic.
In the case of SegRs, AS0–ASℓ denote the ASes that constitute the particular
segment, for EERs they correspond to the ASes on the end-to-end path.
8 The EERINFO field is only used for data-plane packets on EERs.
248
</content>'
- 'passage:
<citation> Antonio Battipaglia et al.. "Evaluation of SCION for User-driven Path
Control: a Usability Study." *Proceedings of the SC ''23 Workshops of The International
Conference on High Performance Computing, Network, Storage, and Analysis*, 2023.
</citation>
<type> research paper </type>
<page> 1 </page>
<content>
Evaluation of SCION for User-driven Path Control: a Usability
Study
Antonio Battipaglia
[email protected]
Politecnico di Torino
Turin, Italy
Leonardo Boldrini
[email protected]
University of Amsterdam
Amsterdam, The Netherlands
Ralph Koning
[email protected]
SIDN Labs
Arnhem, The Netherlands
Paola Grosso
[email protected]
University of Amsterdam
Amsterdam, The Netherlands
ABSTRACT
The UPIN (User-driven Path verification and control in Inter-domain
Networks) project aims to implement a way for users of a network
to control how their data is traversing it. In this paper we investi-
gate the possibilities and limitations of SCION for user-driven path
control. Exploring several aspects of the performance of a SCION
network allows us to define the most efficient path to assign to a
user, following specific requests. We extensively analyze multiple
paths, specifically focusing on latency, bandwidth and data loss,
in SCIONLab, an experimental testbed and implementation of a
SCION network. We gather data on these paths and store it in a
database, that we then query to select the best path to give to a user
to reach a destination, following their request on performance or
devices to exclude for geographical or sovereignty reasons. Results
indicate our software is a viable option to offer users many paths to
choose from, following a series of requests, and therefore perform
user-driven path control in a SCION network.
ACM Reference Format:
Antonio Battipaglia, Leonardo Boldrini, Ralph Koning, and Paola Grosso.
2023. Evaluation of SCION for User-driven Path Control: a Usability Study. In
Workshops of The International Conference on High Performance Computing,
Network, Storage, and Analysis (SC-W 2023), November 12–17, 2023, Denver,
CO, USA. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/
3624062.3624592
1 INTRODUCTION
Citizens and governments depend on digital technologies that are
severely entangled in the main structure of society [5]. These tech-
nologies are built on the traditional Internet architecture and there-
fore inherit some of its limitations, such as the lack of user control over
the network, and a consequent erosion of trust [6]. The Responsible
Internet paradigm wants to overcome these problems by improving
the Internet's transparency, accountability and controllability [6]. The
UPIN (User-driven Path verification and control in Inter-domain
This work is licensed under a Creative Commons Attribution International
4.0 License.
SC-W 2023, November 12–17, 2023, Denver, CO, USA
© 2023 Copyright held by the owner/author(s).
ACM ISBN 979-8-4007-0785-8/23/11.
https://doi.org/10.1145/3624062.3624592
Networks) project, based on the notion of Responsible Internet,
develops a framework for users to control the behaviour of the net-
work [2] while integrating with the current Internet architecture.
However, providing users with a degree of control over network
traffic requires the network architecture itself to be designed differ-
ently from traditional approaches in use today.
SCION [9] is an Internet architecture designed to provide route
control, failure isolation, and explicit trust information for end-
to-end communications to end users. SCION addresses some of
the problems that the Responsible Internet wants to overcome and
provides strong resilience and security properties, as an intrinsic
consequence of good design principles. This is achieved by separat-
ing Autonomous Systems (ASes) into groups of independent routing
sub-planes, called trust domains, which then interconnect to form
complete routes. Trust domains provide natural isolation of routing
failures and manual misconfiguration. More importantly within
our research scope, they give endpoints strong control for both
inbound and outbound traffic, provide meaningful and enforceable
trust, and enable scalable routing updates.
Our research investigates the possibilities and limitations of
relying on a SCION network to provide users with control on how
their traffic is steered through the network. In order to achieve
this user-driven path control, we need to know some properties of
the underlying paths. For example, we analyze how the choice of a
specific path that follows the lowest latency to a desired destination,
as chosen by a user, affects the available bandwidth within a SCION
network. This allows us to examine the impact on performance that
shifting network control from operators to end users has on traffic.
This paper first provides an overview of an existing SCION
network and its capabilities, such as applications that run on it to
show how different paths are affected by latency, bandwidth and
packet loss. We then present our software that leverages on these
applications to build a database that contains extensive information
on paths available in the SCION network we tested. This database
is then queried to provide users with the best possible path they can
choose for reaching a specific destination, based on performance,
geographic placement of devices traversed, and operators that run
them.
The rest of the paper is structured as follows. First, the concepts
of SCION and the UPIN project are explained in section 2 to grasp
the range of this research. Our experimental setup and its capabili-
ties are reviewed in section 3. In section 4 the design considerations
785
</content>'
- 'passage:
<citation> Laurent Chuat et al.. *The Complete Guide to SCION. From Design Principles
to Formal Verification*. Springer International Publishing AG, 2022. </citation>
<type> book </type>
<page> 338 </page>
<content>
13 Deployment and Operation
This chapter presents deployment alternatives and network-operation ap-
proaches of the SCION Internet architecture. Deploying a next-generation
architecture is a challenging task, as it needs to be integrated with and operate
alongside existing infrastructure. In the following, we discuss deployment
approaches of SCION in the real world, supporting both SCION-enabled hosts
and legacy hosts.
We first present considerations around a global SCION deployment. Then
we discuss the stakeholder incentives needed for such a deployment to be suc-
cessful (cf. § 13.1). To this end, different deployment scenarios for ISPs, IXPs,
and end-user domains are outlined.
§13.2 discusses deployment considerations for end hosts, in particular end-
host bootstrapping.
To enable legacy hosts also to benefit from SCION, the SCION–IP Gate-
way (SIG) provides an interface between SCION and the legacy IP world
(cf. § 13.3). Different types of SIG coordination systems have been developed
to facilitate a large-scale deployment of the SIG, including SGRP and SIAM
(cf. § 13.4.1 and § 13.4.2). Furthermore, different SIG deployment scenarios
are discussed.
The Secure Backbone AS (SBAS) enables a partial SCION deployment to
offer secure routing benefits not only to customers of participating ISPs, but
also to hosts all across the legacy Internet, as is explained in § 13.5. To illustrate
the operation of SCION on real networks, the life of a packet is followed in
§13.6.
Required Changes for Different Deployment Scenarios. Table 13.1
shows the required changes for the different technologies that drive deploy-
ment of SCION. The table considers changes that are needed at the leaf AS
(the AS which the end host is in), the ISP (the service provider of the leaf AS),
the Operating System (OS) of the end host, and the application – considering both
the source (src) and the destination (dst).
The table uncovers the large difference in terms of required changes for dif-
ferent technologies, ranging from a full SCION deployment to SBAS. In the
short term, SBAS is a promising approach to make the benefits of SCION
widely available, which in turn will enable SCION to expand its deployment
with increasing use. With the growing availability of SCION at ISPs, the other
deployment options will be supported, resulting in a virtuous cycle of increas-
ing adoption.
318
</content>'
- source_sentence: 'query: Explain how LightningFilter can enhance the performance
of traffic filtering beyond 100 Gbps.'
sentences:
- "passage:\n<url> https://github.com/netsec-ethz/scion-apps/blob/master/pkg/shttp/README.md\
\ </url>\n<type> code </type>\n<content>\n# HTTP over SCION\n\nThis package contains\
\ glue code to use the standard net/http libraries for HTTP\nover SCION.\n\nThis\
\ uses a QUIC session with a single stream as a transport, instead of the\nstandard\
\ TCP (for which we do not have an implementation on top of SCION).\nAs TLS is\
\ always enabled in QUIC, we use an insecure TLS session with self\nsigned certificates\
\ to get something similar to TCP for insecure HTTP.\nFor HTTPS, we'll have two\
\ TLS sessions; the insecure TLS for the basic\ntransport and on top of that,\
\ the \"normal\" TLS for the actual web content.\nThis may seem silly, and the\
\ net/http library provides enough hooks that would\nallow using the \"normal\"\
\ TLS session directly. However, only this setup allows\nto implement CONNECT,\
\ e.g. to proxy HTTPS traffic over HTTP.\n\n### Client\n\nWe use the standard\
\ net/http Client/Transport with a customized Dial function:\n\n```Go\n// Create\
\ a client with our Transport/Dialer:\nclient := &http.Client{\n Transport:\
\ shttp.DefaultTransport,\n}\n// Make requests as usual\nresp, err := client.Get(\"\
http://server:8080/download\")\n```\n\nHostnames are resolved by parsing the `/etc/hosts`\
\ file or by a RAINS lookup\n(see [Hostnames](../../README.md#Hostnames)).\nURLs\
\ potentially containing raw SCION addresses must be *mangled* before\npassing\
\ into the client (or any other place where they might be parsed as URL).\n```Go\n\
resp, err := client.Get(shttp.MangleSCIONURL(\"http://1-ff00:0:110,127.0.0.1:8080/download\"\
))\n```\n\n### Server\n\nThe server is used just like the standard net/http server;\
\ the handlers work\nall the same, only a custom listener is used for serving.\n\
\nExample:\n```Go\nhandler := http.FileServer(http.Dir(\"/usr/share/doc\"))\n\
log.Fatal(shttp.ListenAndServe(\":80\", handler))\n```\n\n</content>"
- 'passage:
<url> https://docs.scion.org/en/latest/manuals/control.html </url>
<type> documentation </type>
<content>
If it is destroyed, the control service loses track of previously created key
epochs.
As key derivation depends on the epoch, keys that have previously been requested
/ derived,
will not match any newly created keys.
The DRKey system is broken for this AS, at least until all entities have fetched
new keys,
which may only happen after multiple epochs.
Defines hosts with privileged access to obtain the protocol and epoch specific
secret value (Level 0 key).
These hosts can locally derive keys shared with any remote AS, without having
to request
them individually from the control service.
However, the hosts must be trusted to not abuse this, as they can also create
keys
to impersonate any other host in the AS.
The set of hosts authorized to access the secret value for delegated key derivation
are specified as a list of IP addresses per supported DRKey protocol identifier.
```
# Example
[drkey.delegation]
scmp = ["203.0.113.17", "198.51.100.249"]
```
Maximum number of Level 1 keys that will be re-fetched preemptively before their
expiration.
### topology.json
The control service reads the control_service section of the topology.json file.
The entry referring to its own general.id
define the addresses that control will listen on.
The interface definitions in the border_router entries define the inter-AS links.
These entries define the beacons that control will originate and propagate.
### Beaconing policies
A beaconing policy is a YAML file, defining processing rules for path-segment
construction and
registration.
There are four policies with different but related purposes, that can individually
be configured
with the beacon.policies options:
Propagation is the process of receiving a beacon from a neighbor AS, extending
it
with one’s own AS entry and forwarding it to downstream neighbor ASes.
See Path Exploration (Beaconing).
The propagation policy determines which beacons are selected to be propagated
and how they are
extended.
Note that there is no separate policy type for beacon origination. The only policy
value
affecting origination is the MaxExpTime, which is
read from the propagation policy.
Registration is the process of making beacons available as path-segments to the
path lookup
process.
Beacons received from a neighbor AS are “terminated” by appending the own AS entry
and registered
in a path-segment database, from which it can be later found with path-segment
queries.
See Registration of Path Segments.
Applies to the registration of core-segments in the local path store of a core
AS.
Applies to the registration of up-segments in the local path store of a non-core
AS.
Applies to the registration of down-segments. The policy is used by a non-core
AS
to determine which down-segments it wants to make available to other ASes.
Each selected down-segments is registered, via a segment registration request,
in the core AS
that originated it.
Note
There is currently no corresponding policy that applies to the processing of segment
registration requests in the core AS.
From the description above, it is already evident that not all four policies are
applicable for
core and non-core ASes. Summarizing this:
| AS type | Applicable policies |
| --- | --- |
| core | Propagation, CoreRegistration |
| non-core | Propagation, UpRegistration, DownRegistration |
The beaconing policy YAML configuration considers the following options:
Restrict this policy configuration file to be used exclusively as one of the
beacon.policies options.
Only as sanity check and organization of configuration files. No operational effect.
Maximum number of segments to propagate/register per origin AS.
In the Propagation policy, this parameter determines the number of beacons
propagated to neighbor ASes per origin AS.
That is, for each originating AS, up to BestSetSize beacons are forwarded.
For the core-beaconing process, the set of originating ASes are all other core
ASes, which can
be very numerous.
Warning
</content>'
- 'passage:
<citation> Laurent Chuat et al.. *The Complete Guide to SCION. From Design Principles
to Formal Verification*. Springer International Publishing AG, 2022. </citation>
<type> book </type>
<page> 227 </page>
<content>
9.2 High-Speed Traffic Filtering with LightningFilter
9.1.5 Prerequisites for Replay Suppression
In summary, any protocol that should be protected by our replay-suppression
system must provide the following in each packet:
1. A timestamp with a precision of at least 100 ms (and a corresponding
global time synchronization) to filter out long-outdated packets and limit
the monitoring period;
2. A unique packet ID to be able to distinguish any two packets; and
3. Authentication of at least the timestamp and the unique packet ID.
The hash used to insert the packet into the Bloom filter can be either the unique
packet ID or the MAC with which the packet is authenticated: Given suffi-
ciently long MACs, collisions—while in principle possible—are highly un-
likely. The false-positive rate caused by these collisions is much lower than
that of the Bloom filters themselves.
9.2 High-Speed Traffic Filtering with
LightningFilter
Intrusion-detection systems and firewalls have become indispensable for de-
tecting and preventing a range of attacks in today’s Internet. Unfortunately,
far from being a panacea, these defense systems suffer from several short-
comings: First, traffic filtering is hindered by the ever more ubiquitous use
of end-to-end encryption. Indeed, deep packet inspection is impossible with-
out terminating encryption (and thus breaking end-to-end secrecy). Therefore,
firewalls are often demoted to filtering based on header attributes and packet
metadata. As these attributes are typically not authenticated, adversaries can
spoof their IP address and thus render filters ineffective in many cases. When
a firewall also incorporates VPN functionality, spoofed IP packets can have
significant impact, as VPNs have been shown to be extremely susceptible to
stateless flooding attacks [ 494]. Second, the complex filtering rules of modern
firewalls are computationally expensive to enforce. As a result, enterprise-
grade firewalls with a throughput beyond 100 Gbps can cost several hundred
thousand USD [ 40]. Furthermore, the advertised performance of firewalls is
often much lower if an adversary sends worst-case traffic in a denial-of-service
(DoS) attack [ 144]. Finally, despite the computational effort spent per packet,
firewalls and intrusion-detection systems suffer from substantial false-positive
and false-negative rates.
To remedy these issues, we developed LightningFilter, a high-speed traffic-
filtering mechanism that leverages DRKey to enable authenticated traffic shap-
ing based on the AS number of the source host. This provides the basis for
207
</content>'
- source_sentence: 'query: Why might a quad-pipeline Tofino switch be advantageous
for SCION border routers?'
sentences:
- "passage:\n<citation> Lars-Christian Schulz et al.. \"Cryptographic Path Validation\
\ for SCION in P4.\" *Proceedings of the 6th on European P4 Workshop*, 2023. </citation>\n\
<type> research paper </type>\n<page> 4 </page>\n<content>\nEuroP4 ’23, December\
\ 8, 2023, Paris, France Lars-Christian Schulz, Robin Wehner, and David Hausheer\n\
+\nKey Table\nT-Tables +\nKey Tablex3\nEgress ParserT-Tables +\nKey Tablex3\n\
0\nEgr. Deparser\nIng. Deparser\nIngress Parser\nT-Tables +\nKey Tablex3\nEgr.\
\ Deparser\nEgress Parser\nSubBytes \nShiftRows +\nKey Table\nIng. Deparser\n\
Ingress Parser\nIng. Deparser\nIngress Parser\nSCION \nBorder Router\nEgress Ports\n\
1\n2\nIngress Ports\nHidden Ports\nPipe Ingress EgressTraffic Mgr.\nEgr. Deparser\n\
Egress Parser\nSCION \nBorder Router\nFigure 4: 1BR+2AES configuration. SCION\
\ router imple-\nmented by 3 Tofino 2 pipelines. Packets pass through all\nthree\
\ pipelines in order, but no packet is recirculated in the\nsame pipeline, thus\
\ the overall switch has the same band-\nwidth as one of the pipelines.\n5 IMPLEMENTATION\
\ ON TOFINO 2\nWe implement our design1 in a quad-pipeline Intel Tofino Switch\n\
(Edgecore DCS810 using the Tofino BFN-T20-128Q ASIC). Each\nof the four pipelines\
\ is connected to eight of the total 32x 400G\nfront-panel Ethernet ports [\n\
1]. The pipelines can be loaded with\nindependent programs, effectively splitting\
\ one P4-programmable\nswitch into four. Additionally, any port of the switch\
\ can be con-\nfigured as a loopback port. Loopback ports become unavailable\n\
for connecting external devices, hosts etc. but can facilitate pass-\ning packets\
\ from pipeline to pipeline. By configuring all ports of a\npipeline as loopback\
\ and directing the flow of packets appropriately,\nwe can “fold” the pipelines\
\ to create a longer pipeline with more\nresources available.\n5.1 Folded Pipe\n\
Our router supports different pipeline configurations depending\non the number\
\ of available pipelines, desired usable port count and\ntotal bandwidth. The\
\ configuration1BR+2AES uses three of the four\navailable pipelines. Figure 4\
\ sketches the layout of operations in\nthe switch. A Packet enters the router\
\ in Pipe 0 where the packet\nprocessing and forwarding part of the border router\
\ is located. The\nborder router processes the packet and creates the bridge header\n\
containing the input for MAC calculation as well as the expected\nMACs. The packet\
\ is directed to Pipe 1 next, where it first encoun-\nters the egress processing\
\ of that pipe. While it may seem unusual\nthat the egress pipe is executed first,\
\ this is a consequence of the\nswitching happening in between ingress and egress.\
\ Once egress\nprocessing has finished, the packet begins ingress processing in\
\ the\nsame pipe, as all ports of this pipe have been configured as loop-\nback/recirculation\
\ ports. In the same way, the packet is directed to\nPipe 2. Pipe 1 and 2 implement\
\ the AES-CMAC validation. If the\nCMAC is valid, the packet is next forwarded\
\ to one of the ports\nin the original border router pipeline 0. Pipe 0’s ports\
\ operate in\nregular mode, thus the packet now leaves the router. If the MAC\
\ is\nnot valid, the packet is dropped by Pipe 2.\n1https://github.com/netsys-lab/scion-p4\n\
+\nKey Table\nT-Tables +\nKey Tablex3\nEgress ParserT-Tables +\nKey Tablex3\n\
0\nEgr. Deparser\nIng. Deparser\nIngress Parser\nIng. Deparser\nIngress Parser\n\
SCION \nBorder Router\nEgress Ports\n1\nIngress Ports\nPipe Ingress EgressTraffic\
\ Mgr.\nEgr. Deparser\nEgress Parser\nSCION \nBorder Router\n+\nKey Table\nT-Tables\
\ +\nKey Tablex3\nEgress ParserT-Tables +\nKey Tablex3\n2\nEgr. Deparser\nIng.\
\ Deparser\nIngress Parser\nIng. Deparser\nIngress Parser\nSCION \nBorder Router\n\
Egress Ports\n3\nIngress Ports\nEgr. Deparser\nEgress Parser\nSCION \nBorder Router\n\
Figure 5: 2BR+2AES configuration. Two instances of the\nSCION router implemented\
\ by 2 Tofino 2 pipelines each.\nPackets are recirculated once in pipe 1 and 3.\
\ Pipe 0-1 and\npipe 2-3 can operate as two independent border routers or as\n\
a single router with twice the number of ports.\nBy folding the pipelines as described,\
\ we lose the front-panel\nports of pipe 1 and 2, but we gain enough resources\
\ per pass through\nthe switch that no additional recirculation in the same pipeline\
\ is\nrequired, thus fitting the complete fast-path of a SCION router in a\nTofino\
\ 2 switch without impacting the per-port throughput.\nNot all Tofino 2 switches\
\ have the full four pipelines, models\nwith two pipelines are available as well.\
\ In order to fit our border\nrouter to a two pipeline Tofino, we can recirculate\
\ the packet once\nin the pipeline implementing AES, giving rise to the second\
\ imple-\nmentation variant 1BR+1AES. Doing so exactly halves the available\n\
bandwidth to a theoretical maximum of 1.6 Tbit/s, but as we leave\npipe 0 unchanged,\
\ the number of ports available for connections to\nother switches stays the same.\
\ As oversubscribing ports can also\nbe useful on quad-pipeline switches, it can\
\ also be desirable to in-\nstantiate two copies of the 2 pipe variant on a quad-pipeline\
\ switch\nas shown in Figure 5. We refer to this configuration as 2BR+2AES.\n\
An overview of the available ports and bandwidth in each con-\nfiguration is given\
\ in Table 1. Note that the number of usable ports\ngiven for 1BR+2AES, 1BR+1AES,\
\ and 2BR+2AES stays the same\nfor switches that do not connect all pipelines\
\ to externally accessi-\nble ports, making them particularly well suited as SCION\
\ border\nrouters.\nIn the next sections we detail the operation of the different\
\ pipes.\nConfiguration Pipes Ports Bandwidth\nBR w/o AES 4/4 32/32 12.8 Tbit/s\n\
1 BR + 2 AES 3/4 8/32 3.2 Tbit/s\n1 BR + 1 AES 2/4 8/32 1.6 Tbit/s\n2 BR + 2 AES\
\ 4/4 16/32 3.2 Tbit/s\nTable 1: Usable 400 Gbit/s ports and total available bandwidth\n\
taking recirculation into account for different configurations\nof the border\
\ router.\n20\n</content>"
- "passage:\n<url> https://github.com/netsys-lab/scion-p4/blob/master/tofino-crypto/aes_2pipes/README.md\
\ </url>\n<type> code </type>\n<content>\nAES-ECB Using 2 Pipelines\n=========================\n\
\n### Build\n```bash\nmake\n```\n\n### Run model\n```bash\nsudo $SDE_INSTALL/bin/veth_setup.sh\n\
\n${SDE}/run_tofino_model.sh --arch tofino2 -p aes_pipe0 --int-port-loop 0x3 \\\
\n -f ptf-tests/test_ports.json -c ptf-tests/test.conf\n${SDE}/run_switchd.sh\
\ --arch tf2 -c ptf-tests/test.conf\n```\n\n### PTF tests\n```bash\n${SDE}/run_p4_tests.sh\
\ --arch tf2 -f ptf-tests/test_ports.json -t ptf-tests\n```\n\nHeader\n------\n\
The program accept Ethernet packets with protocol type `0x9999` followed by the\n\
following header:\n\n```\n 0 1 2 \
\ 3\n 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n\
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n|A|B| Reserved\
\ | Reserved | User Data |\n+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\
| | \\\n| \
\ Block 0 (16 bytes) | | If A is set\n| \
\ | /\n+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\
| | \\\n| \
\ Block 1 (16 bytes) | | If B is set\n| \
\ | /\n+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\
| |\n| \
\ Key (16 bytes) |\n| \
\ |\n+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n\
```\n\n- **Block 0** is encrypted with **Key** if flag **A** is set.\n- **Block\
\ 1** is encrypted with **Key** if flag **B** is set.\n- **User Data** is arbitrary\
\ data that is carried through the pipeline unchanged.\n\n</content>"
- 'passage:
<citation> Cyrill Krähenbühl, Seyedali Tabaeiaghdaei, Simon Scherrer, Matthias
Frei, Adrian Perrig. "Toward Global Latency Transparency." *IEEE/IFIP Networking,
recent results track*, 2024. </citation>
<type> research paper </type>
<page> 3 </page>
<content>
mercial SCION network provides a concrete example of such
a scenario. Figure 1 depicts the number of different paths we
can observe from ETH Z¨urich to 30 ASes located in 5 different
ISDs. Although the SCION network is small compared to
the Internet, the median number of paths to each destination
is over 100, and we expect this number to further increase
with the expansion of the commercial SCION Internet. The
endpoint could heuristically select and probe a subset of paths
but this does not guarantee finding latency-optimal paths. We
discuss a concrete algorithm for efficient path probing based
on GLIDS in Section III-C.
C. Challenges
In this section, we list the most important challenges en-
countered in designing a latency transparency system.
1) Variable Delays: One major challenge in latency estima-
tion is to correctly model all variable delays in the network [7].
The dominating variable delay is typically queuing delay due
to cross traffic filling up a packet queue on the path. The packet
processing and transmission delays are usually negligible in
comparison to the propagation and queuing delays, but may be
included in a latency measurement or calculated based on the
link bandwidth. Since the queuing delay cannot be predicted,
our work focuses on the predictable part of the end-to-end
latency, in particular the propagation delay.
2) Disclosing Internal Topology : While entities can en-
hance latency estimates by revealing detailed information
about their internal topologies, this might reveal sensitive
information to a competitor. It is thus imperative that partici-
pating entities can decide how much information is revealed.
3) Load Balancing and Variable Routes : Internet traffic
is rerouted for various reasons, e.g., economic impact, SLAs,
and bottlenecks. This can happen for inter-domain paths or for
intra-domain paths, e.g., due to traffic engineering or failing
links. Accurate latency estimation is challenging under these
circumstances and requires regularly updated measurements.
III. S YSTEM DESIGN
We propose a latency transparency system that measures
and distributes the propagation latency of inter-domain paths
at Internet scale and present an efficient path probing algorithm
for multipath networks. In designing such a system, we make
use of the facts that latency is an additive metric, and any inter-
domain path can be split into several intra-domain paths and
inter-domain links connecting them. The propagation latency
of an inter-domain path can thus be computed by accumulating
the propagation latencies of all intra-domain paths and inter-
domain links. Therefore, we divide GLIDS into two subsys-
tems: (1) the measurement system that accurately measures
the latency of intra-domain paths and inter-domain links, and
(2) the dissemination system that globally disseminates latency
information.
A. Performing Latency Measurements
In GLIDS, we focus on measuring propagation delay, in-
stead of modeling queuing delay. Note that depending on the
measurement method, the propagation delay measurement may
include the processing and transmission delay, but in practice,
they are typically negligible in comparison to propagation
and queuing delay. Hence, GLIDS requires a participating
AS to be able to measure the propagation latency of intra-
domain paths between their own border routers and of the
links connecting them to neighboring ASes’ border routers as
shown in Figure 2.
There exist a wide variety of ways to measure latency,
which can generally be separated into three groups [16]: (1)
traditional network measurements [17], [18], (2) SDN-based
measurements [19], [20], and (3) telemetry-based measure-
ments [21], [22], each with different tradeoffs. Since we are
interested in the one-way propagation latency, the network
operator must ensure that queuing delay is factored out, e.g.,
via packet prioritization or exclusion of “packet queue time”,
and that potential path asymmetry is taken into consideration,
e.g., using one-way latency measurements with synchronized
clocks instead of RTT measurements. Additionally, multiple
paths with varying latency may exist between border routers
due to redundancy, load balancing, or traffic engineering, and
paths may change over time. The network operator should
measure all possible paths and re-measure changed paths
to revoke the out-of-date path segment and re-disseminate
it. Optionally, the network operator may also enhance the
measurement with the used measurement methodology and a
level of measurement (un-)certainty, e.g., assigning a higher
level of certainty if a more sophisticated latency measurement
approach is used.
B. Disseminating Latency Information
To calculate the latency of an inter-domain path, the end-
points must be provided with the latencies of its constituent
intra-domain paths and inter-domain links. GLIDS achieves
this in a scalable fashion through path exploration and dis-
semination. In the exploration phase, each AS on an inter-
domain path encodes the latency information of its AS hop
in the forwarded path segment. In the dissemination phase,
the endpoint retrieves the path segments with the included
latency information. Note that GLIDS uses an opt-in approach
and an AS can choose to disclose latency information with
appropriate granularity based on the desired level of topology
secrecy. The privacy aspects are discussed in more detail in
Section V-A.
The latency information of each AS hop consists of the
propagation latency of the intra-domain path between the
ingress and the egress border routers (e.g., SF to NJ in
Figure 2) as well as the propagation latency of the inter-
domain links between the border routers of the neighboring
ASes (e.g., SF to LA). Due to possibly asymmetric latencies,
the AS encodes the latency in both directions. If the exact
intra-domain path each packet can take is not predictable,
e.g., due to load-balancing or backup routes, the AS must
disseminate the minimum propagation latency of all possible
routes—which is necessary for efficient path probing, see
Section II-B. In addition to this minimum latency, an AS
</content>'
- source_sentence: 'query: How does session resumption enhance security in SCION?'
sentences:
- 'passage:
<citation> Cyrill Krähenbühl et al.. "Pervasive Internet-Wide Low-Latency Authentication."
*Proceedings of the International Conference on Computer Communications and Networks
(ICCCN)*, 2021. </citation>
<type> research paper </type>
<page> 5 </page>
<content>
device’s certificate (C1) in addition to their endpoint certificate
(C2) and the signed message (M PILA). This requires endpoints
to trust the NAT device in the same way as an AS, i.e., mis-
behavior by NAT devices becomes detectable. The certificates
issued to endpoints behind the NAT device have the same IP
address and additionally specify the outgoing port numbers
as a local identifier as described in §IV-D. This allows other
endpoints to authenticate an endpoint based on its public IP
address and external port number (e.g., an endpoint providing a
service on a specific port can request a certificate which covers
this port). Port numbers are encoded in an X.509 extension in
the same way as IP addresses in resource certificates [30].
Multiple sequential NAT devices are supported as well. Each
NAT device issues certificates for NAT devices within its local
network, which can in turn issue certificates for endpoints
or NAT devices. Each nested NAT device thus requires an
additional certificate. This isolates different hosts behind the
NAT device and thus simplifies detection of misbehavior if a
NAT device issues certificates with overlapping port number
ranges for different entities. IPv6 solves IPv4 address shortage,
one of the main reasons for the widespread deployment of
NAT devices. We expect that with a growing IPv6 adoption,
less NAT devices would be required and PILA deployment
would become easier.
G. Session Resumption
If the underlying protocol supports session resumption, end-
points can combine the session resumption with a PILA hand-
shake and derive the keying material of the new session from
both sources. TLS 1.3 [35], for example, supports combining
pre-shared key and certificate-based authentication to increase
the security of a session [22]. The derived keying material is
authentic if either the pre-shared key derived from previous
keying material or the keying material produced by the PILA
handshake is authentic and no secret values were leaked. Since
PILA reduces the attack surface to the endpoints’s ASes,
authenticated session resumption over different ASes increases
the number of ASes that an attacker has to compromise in
order to launch a successful undetected MitM attack.
H. Downgrade Prevention
Whenever an initiator communicates with an unknown
responder, an attacker might perform a downgrade attack to
reduce the security to a less secure protocol (e.g., TOFU
protocol). An attacker attempts to convince the initiator that a
responder’s AS does not support PILA or that the responder
does not allow a specific PILA-supported protocol.
AS Downgrade. AS downgrade is prevented by locally
keeping a regularly updated list at each AS containing all
PILA-enabled ASes with their certificate service addresses.
Endpoints then request certificates for a specific AS or all
ASes that originate a specific IP address from their local
certificate service, which responds with a signed list of the
AS certificates.
Endpoint Downgrade. An AS that supports PILA must
provide proof that a service at a given IP address does not
allow a specific PILA-supported protocol to assure a sending
endpoint that its communication is not being downgraded. An
endpoint sends a request including a PILA-supported protocol,
an IP address, and the current time as a timestamp. The
certificate service replies with a signed proof that contains the
hash of the request and a (possible empty) list of certificate
entries valid at the requested time. A certificate entry consists
of the hash of the certificate and its validity period. The
endpoint then verifies the signature and that the returned list
is empty before falling back to a non-PILA protocol.
While these approaches for both the AS and endpoint down-
grade prevention method work well and are easy to implement,
they have a large computational overhead due to the signature
operation necessary to create each proof. A more elaborate
approach that scales better to a large number of requests is
organizing AS and endpoint certificates in public append-only
logs as in certificate transparency. The AS certificate log must
provide a globally consistent view of all AS certificates, while
the endpoint certificate log can be implemented as a separate
log per AS. Each log is accompanied by a verifiable log-
backed map [15], which provides a verifiable key-value store
that can efficiently derive proofs of presence for a specific key-
value mapping and proofs of absence for non-existing keys.
The log and the log-backed map only require one signature
operation per maximum merge delay (MMD) regardless of
the number of requests. The log-backed maps allow endpoints
to fetch an AS certificate for an AS number and a list of
certificate entries from an ⟨IP-address, protocol ⟩tuple.
V. S ECURITY ANALYSIS
The goal of PILA is to provide an initiator with an authentic
X.509 certificate of a responder, in the presence of an attacker
that can intercept, reorder, modify, and create arbitrary packets.
The underlying protocol uses this certificate to derive an
authentic key between the initiator and responder (session-
establishment protocol) or to verify the correctness of a
message signed by the responder (query–response protocol).
PILA provides the initiator with an authentic certificate if the
responder’s AS is honest or a CuBC attacker and the initiator,
responder, and global trust anchors are benign and none
of these entities are compromised. In mutual authentication,
both endpoints act as responders. The goal of an attacker
is to convince the initiator to accept a forged certificate by
performing a MitM attack, by downgrading to a non-PILA
connection, or by compromising a private key of a certificate
in the certificate chain. Additionally, we analyze attacks on
AS trust and denial-of-service (DoS) attacks.
MitM Attack. An attacker can perform a MitM attack to
impersonate an endpoint by providing a forged certificate to
the initiator. For protocols that establish secure sessions, this is
done by intercepting the handshake messages and simultane-
ously creating two separate connections with the initiator and
responder. For query–response protocols, the attacker modifies
the response and possibly the signature within the response.
If the endpoints resume sessions as described in §IV-G, an
</content>'
- 'passage:
<citation> João C. Pereira et al.. "Protocols to Code: Formal Verification of
a Next-Generation Internet Router." 2024. </citation>
<type> research paper </type>
<page> 15 </page>
<content>
verification of the Raft consensus protocol. In Jeremy Avigad and Adam
Chlipala, editors, Proceedings of the 5th ACM SIGPLAN Conference on
Certified Programs and Proofs, Saint Petersburg, FL, USA, January 20-22,
2016. ACM, 2016.
[54] Fuyuan Zhang, Limin Jia, Cristina Basescu, Tiffany Hyun-Jin Kim,
Yih-Chun Hu, and Adrian Perrig. Mechanized network origin and
path authenticity proofs. In Gail-Joon Ahn, Moti Yung, and Ninghui Li,
editors, Proceedings of the 2014 ACM SIGSAC Conference on Computer
and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014 ,
pages 346–357. ACM, 2014.
[55] Jean Karim Zinzindohoué, Karthikeyan Bhargavan, Jonathan
Protzenko, and Benjamin Beurdouche. HACL*: A verified modern
cryptographic library. In Bhavani M. Thuraisingham, David Evans,
Tal Malkin, and Dongyan Xu, editors, Proceedings of the 2017 ACM
SIGSAC Conference on Computer and Communications Security, CCS
2017, Dallas, TX, USA, October 30 - November 03, 2017 , pages 1789–1806.
ACM, 2017.
A Reported Bugs and Improvements
A.1 Protocol Vulnerabilities
As mentioned in the paper, we found multiple protocol vul-
nerabilities in the early stages of this verification project,
which lead to five concrete attacks (see Table 1). Three of
these attacks related to subtle edge cases in the segment
switching logic, highlighting the logic’s complexity and the
necessity of its formal verification. The most severe attack
allows an attacker to create an arbitrary forwarding path,
hence violating all three security properties stated in the pa-
per. This attack exploits multiple vulnerabilities, in particular,
missing validation checks.
We reported all these forwarding protocol vulnerabilities
to the SCION developers, who resolved all but one minor
issue that is currently under discussion. Some attacks were
resolved directly, by adding additional checks to the router
code, others exploited vulnerabilities in former protocol ver-
sion of SCION [40] that have been resolved in SCION’s re-
design [14].
A.2 Protocol Improvement
Originally, SCION routers checked for valleys only when
switching between segments, as it was assumed that the con-
trol plane would correctly and securely construct internally
valley-free segments.
While modeling and carrying out our formal proofs, we
noticed that substantially stronger valley- and loop-freedom
properties can be achieved if valley checks are also added to
the intra-segment forwarding logic. In particular, we prove
that valley-freedom holds even if all on-path ASes are mali-
cious. Furthermore, this allows us to prove a stronger loop-
freedom property, stating that a loop can happen only if all
ASes in the loop are malicious (as opposed to at least one AS,
as we had previously).
A.3 Reported Implementation Bugs
In Table 2, we list all the thirteen issues we identified and re-
ported to the SCION developers. All issues were confirmed,
and seven of them have been fixed already. Additionally,
there are proposed solutions for three of the remaining is-
sues. Besides these issues, we identified a performance bug in
the functionprocessIntraBFD, due to the use of acontinue
statement instead of a break to exit early from a loop. This
report, however, is not a direct consequence of our verifica-
tion efforts because we are not using Gobra to reason about
the performance characteristics of the program. Finally, we
proposed two improvements to the SCION developers, which
were accepted: first, we suggested passing a large structure
by reference, instead of by value; second, we identified a
loop that could run for, at most, one iteration. In this case,
we replaced the loop with straight-line code. This had the
positive side-effect of simplifying our proofs, eliminating the
need for loop invariants and termination measures.
15
</content>'
- 'passage:
<citation> Laurent Chuat et al.. *The Complete Guide to SCION. From Design Principles
to Formal Verification*. Springer International Publishing AG, 2022. </citation>
<type> book </type>
<page> 404 </page>
<content>
15 Use Cases and Applications
parts of the Linux network stack, enabling significant performance improve-
ments compared to userspace applications.
Supporting even multipoint-to-multipoint transfers, Bittorrent over SCION
[198, 300] adds multipath support to the Bittorrent protocol. Based on the ini-
tial SCION Swarm work [ 445], it enables connections between peers over mul-
tiple paths. This way Bittorrent over SCION is able to aggregate the available
download and upload bandwidth to achieve high-speed transfers. Bittorrent
over SCION runs as a userspace application, but benefits from several opti-
mizations in the SCION libraries to achieve file transfers in the Gbit/s range.
Additionally, the application relies on a replacement of the SCION dispatcher
based on eBPF as described in § 12.1.1.
15.2.3 Multipath Video Streaming
Video streaming is also a promising candidate to benefit from multipath com-
munication. SCION Video Setup [ 199] is an ongoing project that focuses on
multipath video streaming over SCION, in order to increase the bandwidth and
reduce the latency for a better streaming experience.
Video calls are particularly sensitive to changing network conditions and
SCION presents the opportunity to leverage path-awareness to optimize video
call Quality of Experience (QoE). The SCION-WebRTC [ 204, 205] project
combines SCION and WebRTC in an iOS/macOS application and performs
video call-specific path selection at the application layer. Latency and packet
loss metrics are measured in real-time to discover the live conditions of numer-
ous available paths. SCION-WebRTC uses these QoS metrics to select paths
that have favorable network conditions and are likely to yield a high call QoE.
Since the app performs path selection at the application layer, WebRTC met-
rics like video/audio freezes, video resolution and video frame rate can also
be considered in the path selection process. These metrics more closely reflect
the call QoE than network QoS metrics do, and SCION-WebRTC continuously
monitors them during a call to decide whether to switch to a different path for
outgoing call traffic. This path selection strategy has shown to substantially
increase the quality of video calls in case of sudden bandwidth restrictions.
SCION-WebRTC also makes use of redundant transmission as an additional
method to increase the reliability of video calls. Call traffic, especially the
crit-
ical and low-bitrate audio track, can be sent redundantly over disjoint paths.
The redundancy is able to mask adverse events such as packet loss or high
packet delay variation as long as it does not affect all redundant packets. Ex-
periments have also shown the efficacy of this technique [ 204].
384
</content>'
- source_sentence: 'query: How does the SCION telemetry collector infer internal topology
information?'
sentences:
- 'passage:
<url> https://docs.scion.org/en/latest/cryptography/trc-signing-ceremony-phases-sensitive.html
</url>
<type> documentation </type>
<content>
```
cat << EOF > $TRCID.toml
isd = {{.ISD}}
description = {{.Description}}
serial_version = {{.SerialNumber}}
base_version = 1
grace_period = {{.GracePeriod}}
voting_quorum = {{.VotingQuorum}}
votes = {{.Votes}}
core_ases = {{.CoreASes}}
authoritative_ases = {{.AuthoritativeASes}}
cert_files = {{.CertFiles}}
no_trust_reset = false
[validity]
not_before = {{.NotBefore}}
not_after = {{.NotAfter}}
EOF
```
Display the payload template file with the variables filled-in on the device
monitor. The voting representatives should compare the contents of the file
with their answers to the previous questions, to ensure that all the data is
correct.
Once the data has been verified, compute the DER encoding of the TRC data:
```
scion-pki trcs payload --predecessor $PREDID.trc --template $TRCID.toml --out
$TRCID.pld.der
```
Compute the SHA256 sum of the TRC payload file using:
```
sha256sum $TRCID.pld.der
```
Connect the USB flash drive to your device, and copy the TRC payload file to
the root directory, then disconnect the USB flash drive. Hand out the USB flash
drive
to the voting representatives.
The voting representatives proceed to check the contents of the TRC payload
file by computing the SHA256 sum. Over the duration of the checks, keep the
SHA256 sum of the file available on the monitor for inspection.
This phase concludes once every voting representative confirms that the
contents of the TRC payload are correct. Once that happens, announce that
Phase 2 has successfully concluded.
### Phase 3 - Signing of the TRC Payload
This phase consists of the voting representatives casting votes on the TRC
payload file. Furthermore, all voting representatives that include a
previously not included certificate must show proof-of-possession, i.e., show
that they have access to the private key listed in these fresh certificates.
This is done by signing the TRC with the respective private key. The phase
concludes after all voting representatives have cast their votes, the
applicable parties have shown proof-of-possession, and copied the resulting
signatures onto the USB flash drive.
As part of this phase, the voting representatives inspect the TRC payload.
Display the TRC payload using:
```
scion-pki trc inspect $TRCID.pld.der
```
```
openssl asn1parse -i -in $TRCID.pld.der -inform der
# The asn1parse command is a diagnostic utility that can parse ASN.1 structures.
#
# -i: indent the output according to the depth in the structure.
# -in: the input file.
# -inform: the input format. We have an ASN.1 DER encoded structure.
```
Walk the voting representatives through the output and describe the meaning
and implications of each part.
Once every voting representative has finished the signing process, announce
that Phase 3 has successfully concluded.
### Phase 4 - Assembly of the TRC
This phase consists of assembling the final TRC by aggregating the payload data
with
the votes and proof-of-possessions (signatures) cast by the voting representatives.
Connect the USB flash drive to the device. Given the example data, the votes
should be available at the following locations on the USB flash drive:
- /bern/isd.sensitive.vote.trc
- /geneva/isd.sensitive.vote.trc
- /zürich/isd.sensitive.vote.trc
The proof-of-possessions for the freshly included certificates should be available
at the following locations on the USB flash drive:
- /bern/isd.sensitive.trc
- /bern/isd.regular.trc
- /geneva/isd.sensitive.trc
- /geneva/isd.regular.trc
- /zürich/isd.sensitive.trc
- /zürich/isd.regular.trc
To assemble the final TRC in a file, run the following command:
</content>'
- "passage:\n<url> https://github.com/scionproto-contrib/jpan/blob/master/doc/PathPolicyLanguage.md\
\ </url>\n<type> code </type>\n<content>\n# Path Policy Language\n\nThe path policy\
\ language is a way to specify complex path policies and\nexchange them via JSON\
\ files.\n\nJPAN supports a variant of the path policy language defined in the\n\
[Path Policy Language](https://docs.scion.org/en/latest/dev/design/PathPolicy.html).\n\
\nSpecifically, JPAN supports:\n\n* Path Language Policies (PPL) which consist\
\ of ACLs and Sequences as defined in\n [Path Policy Language](https://docs.scion.org/en/latest/dev/design/PathPolicy.html).\n\
\ ACLs and Sequences can\n* PPL groups (PPLG) which consist of multiple PPLs.\n\
\n## ACL\n\nAn ACL is a sequence of yes/no filters followed by a default behavior.\n\
The filters are processed in order. If a filter matches, the path is accepted\
\ or rejected\ndepending on the filter's setting. If no filter matches, the default\
\ behavior is applied.\n\nFor example, the following filter will accept (`+`)\
\ any path that contains the ISD-AS `1-ff00:0:133`\nor `1-ff00:0:120`. It will\
\ reject any other path going though ISD-AS `1`.\nAll other paths are accepted.\n\
\n```\nacl:\n - '+ 1-ff00:0:133'\n - '+ 1-ff00:0:120'\n - '- 1'\n \
\ - '+'\n```\n\nFor details please refer to the original specification\n[Path\
\ Policy Language](https://docs.scion.org/en/latest/dev/design/PathPolicy.html).\n\
\n## Sequence\n\nTHe following is copied from the original specification:\n\n\
### Operators\n\n```\n ? (the preceding HP may appear at most once)\n +\
\ (the preceding ISD-level HP must appear at least once)\n * (the preceding\
\ ISD-level HP may appear zero or more times)\n | (logical OR)\n```\n\nPlanned:\n\
\n```\n ! (logical NOT)\n & (logical AND)\n```\n\nThe sequence is a string\
\ of space separated HPs. The operators can be used for advanced interface\nsequences.\n\
\nThe following example specifies a path from any interface in AS 1-ff00:0:133\
\ to two subsequent\ninterfaces in AS `1-ff00:0:120` (entering on interface `2`\
\ and exiting on interface `1`), then there\nare two wildcards that each match\
\ any AS. The path must end with any interface in AS 1-ff00:0:110.\n\n```\n sequence:\
\ \"1-ff00:0:133#0 1-ff00:0:120#2,1 0 0 1-ff00:0:110#0\"\n```\n\nAny path that\
\ is matched by the above policy must traverse three transit ASes. In many cases\
\ the\nnumber of ASes or hops is not known. With the regex-style it is possible\
\ to express such sequences.\n\nThe following example specifies a path from interface\
\ `1-ff00:0:133#1` through multiple ASes in ISD\n`1`, that may (but does not need\
\ to) traverse AS `2-ff00:0:1` and then reaches its destination on\n`2-ff00:0:233#1`.\n\
\n```\n sequence: \"1-ff00:0:133#1 1+ 2-ff00:0:1? 2-ff00:0:233#1\"\n```\n\n##\
\ PPL Groups\n\nA PPL group (PPLG) i consists of a set of named PPLs and a set\
\ of filters that determine which\npolicy is used. The filters consists of:\n\n\
- ISD or `0` for catch all\n- optional: AS number, `0` for catch all\n- optional\
\ if AS is given: IP address\n- optional if IP is given: port number\n\nThere\
\ must be one `default` PPL with `0` that applies when no other PPL matches.\n\
\nPPLGs can be defined via API or via YAML or JSON files. For example:\n\n```yaml\n\
---\ngroup:\n - destination: \"1-0:0:110,10.0.0.2\"\n policy: policy_110a\n\
\ - destination: \"1-0:0:110\"\n policy: policy_110b\n - destination: \"\
0\"\n policy: default\n\npolicies:\n - name: default\n acl:\n - \"\
+ 1-ff00:0:111\",\n - \"+ 1-ff00:0:112\",\n - \"- 1\",\n - \"+\"\
\n - name: policy_110a\n \"sequence\": \"1-ff00:0:133#0 1-ff00:0:120#2,1 0\
\ 0 1-ff00:0:110#0\"\n - name: policy_110b\n acl:\n - \"+ 1-ff00:0:133\"\
,\n - \"+ 1-ff00:0:120\",\n - \"- 1\",\n - \"+\"\n```\n\n```json\n\
{\n \"group\": {\n \"1-0:0:110,10.0.0.2\": \"policy_110a\",\n \"1-0:0:110\"\
: \"policy_110b\",\n \"0\": \"default\"\n },\n \"policies\": {\n \"default\"\
: {\n \"acl\": [\n \"+ 1-ff00:0:111\",\n \"+ 1-ff00:0:112\"\
,\n \"- 1\",\n \"+\"\n ]\n },\n \"policy_110a\": {\n\
\ \"sequence\": \"1-ff00:0:133#0 1-ff00:0:120#2,1 0 0 1-ff00:0:110#0\"\n\
\ },\n \"policy_110b\": {\n \"acl\": [\n \"- 1-ff00:0:130#0\"\
,\n \"- 1-ff00:0:131#0\",\n \"- 1-ff00:0:132#0\",\n \"+\"\
\n ]\n }\n }\n}\n```\n</content>"
- 'passage:
<citation> Lars-Christian Schulz et al.. "ID-INT: Secure Inter-Domain In-Band
Telemetry." *2024 20th International Conference on Network and Service Management
(CNSM)*, 2024. </citation>
<type> research paper </type>
<page> 7 </page>
<content>
updated from SCION path servers. Additionally, the collector
infers internal topology information of the observed ASes, in
order to correctly map telemetry data to internal links. (2) The
telemetry data itself (latencies, link utilization, etc.) has to be
stored for later analysis. (3) The collector provides an API for
other end hosts to retrieve historical information on potential
paths to use for their routing decisions. The first function of
the collector is implemented with the help of a PostgreSQL
relational database storing a graph structure of known ASes,
routers, links, and paths. The second function is fulfilled by an
InfluxDB time-series database. We choose InfluxDB as there
is a rich ecosystem of analysis tools available for it. Finally,
the client API is designed as REST API offering endpoints
aggregated telemetry data from certain routers or links. We
note that, other INT collectors can be used with ID-INT as
well, but might require some modifications to make use of the
path information SCION inherently provides. Figure 5 shows
a flow chart of the collector’s operation.
Parse Report Collect AS-Level
Topology
Collect Device-Level
Topology
Collect Telemetry
Metadata
PostgreSQLInfluxDB
Fig. 5. ID-INT collector report processing pipeline.
VI. E VALUATION
We evaluated the ID-INT collector and ID-INT-enabled
border router on servers equipped with AMD Epyc 7543P
CPUs interconnected using Nvidia ConnectX NICs with link
speeds of at least 100 Gbit/s in order to avoid bottlenecks.
Evaluation traffic for the border router was generated using
an Intel Tofino 2 switch.
In our testing, the telemetry collector was able to process
approximately 50,000 reports/second with numbers fluctuating
slightly depending on the size of the reports. We found that the
collector was bottlenecked by the mapping of raw telemetry
reports to topology nodes stored in a relational database.
Higher performance would likely be possible if reports were
written directly to Influx DB. The latency of the collector,
which we define as the time from receiving a report to the
data being reflected in the databases, was measured at around
2.3 seconds, due to aggressive batching of database operations.
We measured the packet processing speed of the ID-INT-
enabled SCION border router to assess the overhead on
the router side. The measurements were carried out with
UDP/SCION packets containing a payload of 1000 bytes.
We varied the amount of requested telemetry data between
8 and 42 bytes which is the maximum possible. Our initial
results showed a 3% loss in packet throughput compared to no-
INT traffic when only telemetry authentication was requested.
If telemetry is also encrypted, throughput diminished by
up to 33%. We traced most of the performance impact to
the standard cryptography library used throughout SCION’s
code. In order to improve performance, we reimplemented
specialized versions of the cryptographic functions used by ID-
INT in C, making use of AES-NI instructions. As SCION is
implemented in Go, the functions must be called through Go’s
foreign function interface (FFI) cgo. The improved results in
Figure 6 show that ID-INT with custom cryptography via cgo
has no measurable impact on the border router throughout as
long as only authentication is required. Encryption still has
a performance impact of 9% to 13% depending on ID-INT
payload size.
No ID-INT 8 byte 16 byte 24 byte 32 byte 42 byte
0
0.5
1
1.5
2
2.5
2.8
3
1e5 packets/s
Auth. ID-INT (cgo)
Encrypted ID-INT (cgo)
Auth. ID-INT
Encrypted ID-INT
Fig. 6. Packet throughout of the ID-INT-enabled SCION border router for
different amounts of requested metadata compared to packets without ID-INT
headers. Throughput is shown for ID-INT with only authentication and with
additional encryption. Results marked (cgo) make use of AES-NI instructions
via Go’s cgo FFI. Error bars show 20% and 80% quantiles over 10 repetitions.
While packet throughput is the most important metric for
routers, ID-INT also affects applications goodput. The more
telemetry headers are inserted in the packet headers, the less
space remains for the application payload disincentivizing
applications to send ID-INT requests with every data packet
which will further reduce the burden on routers. We leave an
evaluation of ID-INT applications and application goodput to
future work.
VII. D ISCUSSION
In this section we discuss the deployment of ID-INT and
potential extensions to the protocol and its implementations.
A. Incremental Deployment
ID-INT does not require all routers on the path to be able to
fully parse the ID-INT header. SCION routers only need to be
able to recognize the ID-INT option and compute the header
length from the length field and length of the optional verifier
address, in order to access the transport header (typically
UDP). As the transport header is only strictly required by
the last border router on a path, it is sufficient if SCION
edge routers recognize ID-INT. Pure transit routers, that do
not allow packets to enter an AS, can remain unchanged.
Nevertheless, the value of deploying ID-INT increases with
2024 20th International Conference on Network and Service Management (CNSM)
</content>'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on intfloat/multilingual-e5-small
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dev evaluation
type: dev-evaluation
metrics:
- type: cosine_accuracy@1
value: 0.19641465315666407
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.4247856586126267
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.5226032735775527
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.6430241621200312
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.19641465315666407
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.1415952195375422
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.10452065471551054
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.06430241621200312
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19641465315666407
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4247856586126267
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5226032735775527
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6430241621200312
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.4080783748538108
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.3341112100854903
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.3451374323676864
name: Cosine Map@100
---
# SentenceTransformer based on intfloat/multilingual-e5-small
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small) <!-- at revision c007d7ef6fd86656326059b28395a7a03a7c5846 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
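Because the final `Normalize()` module L2-normalizes the 384-dimensional embeddings, cosine similarity and dot-product similarity are equivalent for this model's outputs.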
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-multilingual-e5-small")
# Run inference
sentences = [
'query: How does the SCION telemetry collector infer internal topology information?',
    'passage:\n<citation> Lars-Christian Schulz et al.. "ID-INT: Secure Inter-Domain In-Band Telemetry." *2024 20th International Conference on Network and Service Management (CNSM)*, 2024. </citation>\n<type> research paper </type>\n<page> 7 </page>\n<content>\nupdated from SCION path servers. Additionally, the collector\ninfers internal topology information of the observed ASes, in\norder to correctly map telemetry data to internal links. (2) The\ntelemetry data itself (latencies, link utilization, etc.) has to be\nstored for later analysis. (3) The collector provides an API for\nother end hosts to retrieve historical information on potential\npaths to use for their routing decisions. The first function of\nthe collector is implemented with the help of a PostgreSQL\nrelational database storing a graph structure of known ASes,\nrouters, links, and paths. The second function is fulfilled by an\nInfluxDB time-series database. We choose InfluxDB as there\nis a rich ecosystem of analysis tools available for it. Finally,\nthe client API is designed as REST API offering endpoints\naggregated telemetry data from certain routers or links. We\nnote that, other INT collectors can be used with ID-INT as\nwell, but might require some modifications to make use of the\npath information SCION inherently provides. Figure 5 shows\na flow chart of the collector’s operation.\nParse Report Collect AS-Level\nTopology\nCollect Device-Level\nTopology\nCollect Telemetry\nMetadata\nPostgreSQLInfluxDB\nFig. 5. ID-INT collector report processing pipeline.\nVI. E VALUATION\nWe evaluated the ID-INT collector and ID-INT-enabled\nborder router on servers equipped with AMD Epyc 7543P\nCPUs interconnected using Nvidia ConnectX NICs with link\nspeeds of at least 100 Gbit/s in order to avoid bottlenecks.\nEvaluation traffic for the border router was generated using\nan Intel Tofino 2 switch.\nIn our testing, the telemetry collector was able to process\napproximately 50,000 reports/second with numbers fluctuating\nslightly depending on the size of the reports. We found that the\ncollector was bottlenecked by the mapping of raw telemetry\nreports to topology nodes stored in a relational database.\nHigher performance would likely be possible if reports were\nwritten directly to Influx DB. The latency of the collector,\nwhich we define as the time from receiving a report to the\ndata being reflected in the databases, was measured at around\n2.3 seconds, due to aggressive batching of database operations.\nWe measured the packet processing speed of the ID-INT-\nenabled SCION border router to assess the overhead on\nthe router side. The measurements were carried out with\nUDP/SCION packets containing a payload of 1000 bytes.\nWe varied the amount of requested telemetry data between\n8 and 42 bytes which is the maximum possible. Our initial\nresults showed a 3% loss in packet throughput compared to no-\nINT traffic when only telemetry authentication was requested.\nIf telemetry is also encrypted, throughput diminished by\nup to 33%. We traced most of the performance impact to\nthe standard cryptography library used throughout SCION’s\ncode. In order to improve performance, we reimplemented\nspecialized versions of the cryptographic functions used by ID-\nINT in C, making use of AES-NI instructions. As SCION is\nimplemented in Go, the functions must be called through Go’s\nforeign function interface (FFI) cgo. The improved results in\nFigure 6 show that ID-INT with custom cryptography via cgo\nhas no measurable impact on the border router throughout as\nlong as only authentication is required. Encryption still has\na performance impact of 9% to 13% depending on ID-INT\npayload size.\nNo ID-INT 8 byte 16 byte 24 byte 32 byte 42 byte\n0\n0.5\n1\n1.5\n2\n2.5\n2.8\n3\n1e5 packets/s\nAuth. ID-INT (cgo)\nEncrypted ID-INT (cgo)\nAuth. ID-INT\nEncrypted ID-INT\nFig. 6. Packet throughout of the ID-INT-enabled SCION border router for\ndifferent amounts of requested metadata compared to packets without ID-INT\nheaders. Throughput is shown for ID-INT with only authentication and with\nadditional encryption. Results marked (cgo) make use of AES-NI instructions\nvia Go’s cgo FFI. Error bars show 20% and 80% quantiles over 10 repetitions.\nWhile packet throughput is the most important metric for\nrouters, ID-INT also affects applications goodput. The more\ntelemetry headers are inserted in the packet headers, the less\nspace remains for the application payload disincentivizing\napplications to send ID-INT requests with every data packet\nwhich will further reduce the burden on routers. We leave an\nevaluation of ID-INT applications and application goodput to\nfuture work.\nVII. D ISCUSSION\nIn this section we discuss the deployment of ID-INT and\npotential extensions to the protocol and its implementations.\nA. Incremental Deployment\nID-INT does not require all routers on the path to be able to\nfully parse the ID-INT header. SCION routers only need to be\nable to recognize the ID-INT option and compute the header\nlength from the length field and length of the optional verifier\naddress, in order to access the transport header (typically\nUDP). As the transport header is only strictly required by\nthe last border router on a path, it is sufficient if SCION\nedge routers recognize ID-INT. Pure transit routers, that do\nnot allow packets to enter an AS, can remain unchanged.\nNevertheless, the value of deploying ID-INT increases with\n2024 20th International Conference on Network and Service Management (CNSM)\n</content>',
    'passage:\n<url> https://docs.scion.org/en/latest/cryptography/trc-signing-ceremony-phases-sensitive.html </url>\n<type> documentation </type>\n<content>\n```\ncat << EOF > $TRCID.toml\nisd = {{.ISD}}\ndescription = {{.Description}}\nserial_version = {{.SerialNumber}}\nbase_version = 1\ngrace_period = {{.GracePeriod}}\nvoting_quorum = {{.VotingQuorum}}\nvotes = {{.Votes}}\ncore_ases = {{.CoreASes}}\nauthoritative_ases = {{.AuthoritativeASes}}\ncert_files = {{.CertFiles}}\nno_trust_reset = false\n\n[validity]\nnot_before = {{.NotBefore}}\nnot_after = {{.NotAfter}}\nEOF\n```\n\nDisplay the payload template file with the variables filled-in on the device\nmonitor. The voting representatives should compare the contents of the file\nwith their answers to the previous questions, to ensure that all the data is\ncorrect.\n\nOnce the data has been verified, compute the DER encoding of the TRC data:\n\n```\nscion-pki trcs payload --predecessor $PREDID.trc --template $TRCID.toml --out $TRCID.pld.der\n```\n\nCompute the SHA256 sum of the TRC payload file using:\n\n```\nsha256sum $TRCID.pld.der\n```\n\nConnect the USB flash drive to your device, and copy the TRC payload file to\nthe root directory, then disconnect the USB flash drive. Hand out the USB flash drive\nto the voting representatives.\n\nThe voting representatives proceed to check the contents of the TRC payload\nfile by computing the SHA256 sum. Over the duration of the checks, keep the\nSHA256 sum of the file available on the monitor for inspection.\n\nThis phase concludes once every voting representative confirms that the\ncontents of the TRC payload are correct. Once that happens, announce that\nPhase 2 has successfully concluded.\n\n### Phase 3 - Signing of the TRC Payload\n\nThis phase consists of the voting representatives casting votes on the TRC\npayload file. Furthermore, all voting representatives that include a\npreviously not included certificate must show proof-of-possession, i.e., show\nthat they have access to the private key listed in these fresh certificates.\nThis is done by signing the TRC with the respective private key. The phase\nconcludes after all voting representatives have cast their votes, the\napplicable parties have shown proof-of-possession, and copied the resulting\nsignatures onto the USB flash drive.\n\nAs part of this phase, the voting representatives inspect the TRC payload.\nDisplay the TRC payload using:\n\n```\nscion-pki trc inspect $TRCID.pld.der\n```\n\n```\nopenssl asn1parse -i -in $TRCID.pld.der -inform der\n\n# The asn1parse command is a diagnostic utility that can parse ASN.1 structures.\n#\n# -i: indent the output according to the depth in the structure.\n# -in: the input file.\n# -inform: the input format. We have an ASN.1 DER encoded structure.\n```\n\nWalk the voting representatives through the output and describe the meaning\nand implications of each part.\n\nOnce every voting representative has finished the signing process, announce\nthat Phase 3 has successfully concluded.\n\n### Phase 4 - Assembly of the TRC\n\nThis phase consists of assembling the final TRC by aggregating the payload data with\nthe votes and proof-of-possessions (signatures) cast by the voting representatives.\n\nConnect the USB flash drive to the device. Given the example data, the votes\nshould be available at the following locations on the USB flash drive:\n\n- /bern/isd.sensitive.vote.trc\n- /geneva/isd.sensitive.vote.trc\n- /zürich/isd.sensitive.vote.trc\n\nThe proof-of-possessions for the freshly included certificates should be available\nat the following locations on the USB flash drive:\n\n- /bern/isd.sensitive.trc\n- /bern/isd.regular.trc\n- /geneva/isd.sensitive.trc\n- /geneva/isd.regular.trc\n- /zürich/isd.sensitive.trc\n- /zürich/isd.regular.trc\n\nTo assemble the final TRC in a file, run the following command:\n\n\n</content>',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
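As the training samples and the example above illustrate, this model follows the E5 prefix convention: prepend `query: ` to search queries and `passage: ` to documents before encoding. A minimal sketch (the query and passage texts here are illustrative placeholders):

```python
# E5-style prefixes, as used throughout the training data above
query_emb = model.encode(["query: How does SCION perform path selection?"])
doc_emb = model.encode(["passage: <content> SCION endpoints choose among multiple available path segments ... </content>"])
print(model.similarity(query_emb, doc_emb))  # 1x1 matrix of cosine similarity
```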
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dev-evaluation`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.1964 |
| cosine_accuracy@3 | 0.4248 |
| cosine_accuracy@5 | 0.5226 |
| cosine_accuracy@10 | 0.643 |
| cosine_precision@1 | 0.1964 |
| cosine_precision@3 | 0.1416 |
| cosine_precision@5 | 0.1045 |
| cosine_precision@10 | 0.0643 |
| cosine_recall@1 | 0.1964 |
| cosine_recall@3 | 0.4248 |
| cosine_recall@5 | 0.5226 |
| cosine_recall@10 | 0.643 |
| **cosine_ndcg@10** | **0.4081** |
| cosine_mrr@10 | 0.3341 |
| cosine_map@100 | 0.3451 |
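The table above was produced with the `InformationRetrievalEvaluator` linked above. A minimal sketch of running a comparable evaluation on your own query/passage pairs (the `queries`, `corpus`, and `relevant_docs` mappings below are illustrative placeholders, not the actual dev set):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("tjohn327/scion-multilingual-e5-small")

# Illustrative placeholder data; keys are arbitrary ids.
queries = {"q1": "query: How does GLIDS disseminate latency information?"}
corpus = {
    "d1": "passage: GLIDS disseminates latency information through path exploration ...",
    "d2": "passage: LightningFilter enables authenticated high-speed traffic filtering ...",
}
relevant_docs = {"q1": {"d1"}}  # corpus ids that are relevant to each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev-evaluation")
results = evaluator(model)
print(results)  # cosine_accuracy@k, cosine_ndcg@10, cosine_mrr@10, ...
```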
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 23,040 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:--------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 11 tokens</li><li>mean: 22.48 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 133 tokens</li><li>mean: 500.85 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 1.0</li><li>mean: 1.0</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:-------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>query: What is the purpose of the FuzzLayers target in SCION fuzzing?</code> | <code>passage:<br><url> https://github.com/scionproto/scion/blob/master/pkg/slayers/internal/fuzz/README.md </url><br><type> code </type><br><content><br># Fuzzing Targets for slayers<br><br>This package contains the fuzzing targets for the slayers package.<br>There are multiple targets defined. The default `Fuzz` target fuzzes<br>a full SCION packet decoding run. `FuzzLayers` fuzzes individual layers.<br>Which layer that is fuzzed is determined by the first byte of the input.<br>Furthermore, there is one target per layer for individual fuzzing.<br><br>## Installation<br><br>To run fuzzing in your local environment, you need to have `go-fuzz` and<br>`go-fuzz-build` available in your path.<br><br>See: [go-fuzz](https://github.com/dvyukov/go-fuzz)<br><br>## Start fuzzing<br><br>To start fuzzing, navigate to this directory and run:<br><br>```bash<br>go-fuzz-build --func Fuzz<br>cp -r ../../testdata corpus<br>go-fuzz<br>```<br><br>To run a different target, run:<br><br>```bash<br>go-fuzz --func FuzzSCION<br>```<br><br>## Debugging crashers<br><br>Crashers will be stored in the `crashers` directory. Per c...</code> | <code>1.0</code> |
| <code>query: Why is user involvement considered crucial for network properties in SCION?</code> | <code>passage:<br><citation> Alex Davidson et al.. "Tango or Square Dance? How Tightly Should we Integrate Network Functionality in Browsers?." *to appear in Proceedings of the ACM Workshop on Hot Topics in Networks (HotNets)*, 2022. </citation><br><type> research paper </type><br><page> 6 </page><br><content><br>the TCP/IP File Server (grey host) provides resources over<br>TCP/IP. The HTTP proxy can establish connections both to<br>SCION and TCP/IP servers. These experiments compare the<br>Page Load Time (PLT) running the extension compared to<br>the PLT for the standard browsing experience.<br>The box plots in Figure 3 depict four experiments. The<br>SCION-only experiment shows the load time for a static web-<br>site in which all resources are located on the SCION FS. In<br>the mixed SCION-IPexperiment, the HTTP proxy fetches re-<br>sources from both servers. In the strict-SCION experiment,<br>the browser extension runs inStrict-SCION mode, thus only<br>requesting SCION resources and blocking all others. In this<br>experiment, only one res...</code> | <code>1.0</code> |
| <code>query: Why is it important for endpoints to have the ability to choose between multiple path options?</code> | <code>passage:<br><url> https://www.ietf.org/archive/id/draft-dekater-scion-controlplane-07.txt </url><br><type> specification </type><br><content><br>Network Working Group C. de Kater<br>Internet-Draft N. Rustignoli<br>Intended status: Informational SCION Association<br>Expires: 27 June 2025 S. Hitz<br> Anapaya Systems<br> 24 December 2024<br><br><br> SCION Control Plane<br> draft-dekater-scion-controlplane-07<br><br>Abstract<br><br> This document describes the Control Plane of the path-aware, inter-<br> domain network architecture SCION (Scalability, Control, and<br> Isolation On Next-generation networks). One of the basic<br> characteristics of SCION is that it gives path control to SCION-<br> capable endpoints that can choose between multipl...</code> | <code>1.0</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
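As a minimal sketch, this corresponds to the following loss construction in Sentence Transformers (assuming `model` is the `SentenceTransformer` instance loaded in the usage example above):

```python
from sentence_transformers import losses, util

# Each (query, passage) pair supplies one positive; all other passages in the
# batch act as in-batch negatives, which is why every label above is 1.0.
train_loss = losses.MultipleNegativesRankingLoss(
    model=model,
    scale=20.0,                   # the "scale" parameter above
    similarity_fct=util.cos_sim,  # the "cos_sim" parameter above
)
```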
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `fp16`: True
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 64
- `per_device_eval_batch_size`: 64
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 3
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
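For orientation only, a minimal training sketch matching the non-default hyperparameters above (`model`, `train_dataset`, `train_loss`, and `evaluator` are assumed to be defined as in the earlier snippets; the output path is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    num_train_epochs=3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    fp16=True,
    eval_strategy="steps",
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,
    evaluator=evaluator,
)
trainer.train()
```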
### Training Logs
| Epoch | Step | Training Loss | dev-evaluation_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:-----------------------------:|
| 1.0 | 180 | - | 0.3847 |
| 2.0 | 360 | - | 0.4009 |
| 2.7778 | 500 | 1.2687 | - |
| 3.0 | 540 | - | 0.4081 |
### Framework Versions
- Python: 3.12.3
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.6.0+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
honeia11/resnet-18 | honeia11 | 2025-03-04T13:25:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2025-03-04T13:24:52Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
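Since the card leaves this section empty, the following is only a generic, hedged sketch based on the repository's `image-classification` tag (the image path is a placeholder):

```python
from transformers import pipeline

# Generic sketch; the card itself documents no usage details.
classifier = pipeline("image-classification", model="honeia11/resnet-18")
print(classifier("path/to/image.jpg"))  # placeholder image path
```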
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Legalaz/14_llambocm4_08_14 | Legalaz | 2025-03-04T13:24:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T13:18:07Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top1
* /root/top2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9131
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
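Conceptually, a linear merge is a weighted average of the models' parameter tensors. A minimal sketch of the idea (not mergekit's actual implementation; mergekit normalizes the weights by default, so 0.9131 and 0.0628 are assumed to be rescaled to sum to 1):

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of parameter tensors, the essence of a linear merge."""
    total = sum(weights)
    weights = [w / total for w in weights]  # assumed normalization step
    merged = {}
    for name in state_dicts[0]:
        acc = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.bfloat16)  # matches "dtype: bfloat16" above
    return merged
```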
|
MikeRoz/Steelskull_L3.3-San-Mai-R1-70b-2.25bpw-h6-exl2 | MikeRoz | 2025-03-04T13:23:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"conversational",
"base_model:TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1",
"base_model:quantized:TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1",
"license:llama3.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | 2025-03-04T12:10:15Z | ---
base_model:
- TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1
library_name: transformers
tags:
- merge
license: llama3.3
---
<!DOCTYPE html>
<style>
/* Base styles */
body {
font-family: 'Quicksand', sans-serif;
background: #000000;
color: #e0e0e0;
margin: 0;
padding: 0;
font-size: 16px;
min-height: 100vh;
position: relative;
}
body::before {
content: '';
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background:
/* Dense tiny stars */
radial-gradient(0.5px 0.5px at 25px 35px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 45px 75px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 55px 165px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 95px 45px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 135px 85px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 165px 125px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 185px 145px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
radial-gradient(0.5px 0.5px at 215px 175px, rgba(255, 255, 255, 0.95) 50%, transparent 50%),
/* Small stars */
radial-gradient(1px 1px at 155px 35px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
radial-gradient(1px 1px at 255px 75px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
radial-gradient(1px 1px at 355px 165px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
radial-gradient(1px 1px at 75px 195px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
radial-gradient(1px 1px at 175px 275px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
radial-gradient(1px 1px at 225px 315px, rgba(255, 255, 255, 0.9) 50%, transparent 50%),
/* Medium stars */
radial-gradient(1.5px 1.5px at 205px 35px, rgba(255, 255, 255, 0.85) 50%, transparent 50%),
radial-gradient(1.5px 1.5px at 305px 155px, rgba(255, 255, 255, 0.85) 50%, transparent 50%),
radial-gradient(1.5px 1.5px at 405px 55px, rgba(255, 255, 255, 0.85) 50%, transparent 50%),
/* Clustered splatter */
radial-gradient(3px 3px at 100px 200px, rgba(255, 255, 255, 0.15) 50%, transparent 50%),
radial-gradient(4px 4px at 105px 205px, rgba(255, 255, 255, 0.1) 50%, transparent 50%),
radial-gradient(5px 5px at 95px 195px, rgba(255, 255, 255, 0.12) 50%, transparent 50%),
radial-gradient(3px 3px at 300px 250px, rgba(255, 255, 255, 0.15) 50%, transparent 50%),
radial-gradient(4px 4px at 305px 255px, rgba(255, 255, 255, 0.1) 50%, transparent 50%),
radial-gradient(5px 5px at 295px 245px, rgba(255, 255, 255, 0.12) 50%, transparent 50%),
/* Random dots */
radial-gradient(0.8px 0.8px at 455px 85px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
radial-gradient(0.8px 0.8px at 505px 125px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
radial-gradient(0.8px 0.8px at 525px 165px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
radial-gradient(0.8px 0.8px at 475px 195px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
radial-gradient(0.8px 0.8px at 495px 225px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
radial-gradient(0.8px 0.8px at 515px 255px, rgba(255, 255, 255, 0.8) 50%, transparent 50%),
/* Large splatter clusters */
radial-gradient(15px 15px at 150px 250px, rgba(255, 255, 255, 0.05) 50%, transparent 50%),
radial-gradient(12px 12px at 155px 255px, rgba(255, 255, 255, 0.07) 50%, transparent 50%),
radial-gradient(10px 10px at 145px 245px, rgba(255, 255, 255, 0.06) 50%, transparent 50%),
radial-gradient(18px 18px at 350px 300px, rgba(255, 255, 255, 0.05) 50%, transparent 50%),
radial-gradient(15px 15px at 355px 305px, rgba(255, 255, 255, 0.07) 50%, transparent 50%),
radial-gradient(12px 12px at 345px 295px, rgba(255, 255, 255, 0.06) 50%, transparent 50%),
/* Extra large splatter */
radial-gradient(25px 25px at 200px 400px, rgba(255, 255, 255, 0.03) 50%, transparent 50%),
radial-gradient(20px 20px at 205px 405px, rgba(255, 255, 255, 0.04) 50%, transparent 50%),
radial-gradient(30px 30px at 195px 395px, rgba(255, 255, 255, 0.02) 50%, transparent 50%),
radial-gradient(35px 35px at 500px 450px, rgba(255, 255, 255, 0.03) 50%, transparent 50%),
radial-gradient(28px 28px at 505px 455px, rgba(255, 255, 255, 0.04) 50%, transparent 50%),
radial-gradient(40px 40px at 495px 445px, rgba(255, 255, 255, 0.02) 50%, transparent 50%);
background-repeat: repeat;
background-size: 600px 600px;
pointer-events: none;
z-index: 0;
opacity: 0.6;
animation: starTwinkle 5s infinite alternate;
}
@keyframes starTwinkle {
0% { opacity: 0.4; }
50% { opacity: 0.6; }
100% { opacity: 0.8; }
}
.container {
max-width: 1200px;
margin: 40px auto;
background-color: rgba(10, 10, 10, 0.97);
padding: 40px;
border: 1px solid rgb(196, 207, 219);
position: relative;
backdrop-filter: blur(10px);
overflow: hidden;
clip-path: polygon(
0 15px, 15px 0,
calc(100% - 15px) 0, 100% 15px,
100% calc(100% - 15px), calc(100% - 15px) 100%,
15px 100%, 0 calc(100% - 15px)
);
}
.container::after {
content: '';
position: absolute;
inset: 0;
background:
linear-gradient(90deg, transparent 49.5%, rgb(196, 207, 219) 49.5%, rgb(196, 207, 219) 50.5%, transparent 50.5%) 0 0/30px 100%,
linear-gradient(0deg, transparent 49.5%, rgb(196, 207, 219) 49.5%, rgb(196, 207, 219) 50.5%, transparent 50.5%) 0 0/100% 30px;
opacity: 0.1;
pointer-events: none;
z-index: 0;
}
.container::before {
content: '';
position: absolute;
inset: -1px;
background: linear-gradient(45deg, rgb(196, 207, 219), transparent 70%);
opacity: 0.2;
z-index: -1;
}
@media (max-width: 1280px) {
.container {
margin: 20px;
padding: 30px;
}
}
/* Typography */
h1, h2, h3, h4 {
color: #ffffff;
text-shadow: 0 0 10px rgba(254, 105, 118, 0.2);
letter-spacing: 2px;
margin: 0 0 20px 0;
font-weight: 600;
position: relative;
padding-left: 15px;
text-transform: uppercase;
}
h1::before, h2::before, h3::before, h4::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(254, 105, 118);
transform: translateY(-50%) skewX(-20deg);
}
h1 { font-size: 36px; }
h2 { font-size: 28px; }
h3 { font-size: 24px; }
h4 { font-size: 20px; }
p {
line-height: 1.8;
color: #ffffff;
margin: 0 0 15px 0;
position: relative;
padding-left: 15px;
}
p::before {
content: '';
position: absolute;
left: 0;
top: 0.8em;
width: 8px;
height: 1px;
background: rgb(196, 207, 219);
transform: skewX(-20deg);
}
/* Links */
a {
color: rgb(254, 105, 118);
text-decoration: none;
transition: all 0.3s ease;
position: relative;
padding: 0 5px;
}
a:hover {
color: #ffffff;
background: rgba(254, 105, 118, 0.1);
}
a::before, a::after {
content: '';
position: absolute;
width: 2px;
height: 0;
background: rgb(196, 207, 219);
transition: height 0.3s ease;
}
a::before {
left: 0;
top: 0;
}
a::after {
right: 0;
bottom: 0;
}
a:hover::before, a:hover::after {
height: 100%;
}
@keyframes linkUnderline {
from { transform: scaleX(0); }
to { transform: scaleX(1); }
}
/* Code elements */
pre {
background-color: rgba(26, 26, 26, 0.95);
padding: 15px;
border-radius: 4px;
overflow-x: auto;
border: 1px solid rgba(196, 207, 219, 0.2);
position: relative;
}
pre::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.1) 49%, rgba(196, 207, 219, 0.1) 51%, transparent 52%);
background-size: 10px 10px;
pointer-events: none;
}
code {
font-family: 'Courier New', monospace;
color: #E0E0E0;
}
/* Section spacing */
.section-container {
margin: 40px 0;
position: relative;
}
.section-container::before {
content: '';
position: absolute;
top: -10px;
left: 0;
width: 50px;
height: 2px;
background: rgb(196, 207, 219);
transform: skewX(-20deg);
}
/* Support section */
.support-section,
.benchmark-container,
.info-card,
.template-card,
.quantized-section,
.settings-card {
margin-top: 40px;
padding: 30px;
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 4px;
position: relative;
overflow: hidden;
z-index: 1;
}
.support-section::before {
content: '';
position: absolute;
top: 0;
right: 0;
width: 100px;
height: 100px;
background: radial-gradient(circle at top right, rgba(196, 207, 219, 0.1), transparent 70%);
pointer-events: none;
}
/* Ensure content is above the geometric pattern */
.model-info,
.metrics-section,
.section-container,
.support-buttons,
.model-composition,
.info-header,
.template-content,
.quantized-items {
position: relative;
z-index: 1;
}
.support-buttons {
display: flex;
gap: 15px;
flex-wrap: wrap;
position: relative;
z-index: 1;
}
/* Button styles */
.button {
display: inline-flex;
align-items: center;
gap: 8px;
padding: 10px 20px;
background: rgba(196, 207, 219, 0.05);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
color: rgb(196, 207, 219);
font-weight: 500;
text-decoration: none;
transition: all 0.3s ease;
position: relative;
overflow: hidden;
text-transform: uppercase;
letter-spacing: 1px;
clip-path: polygon(0 0, calc(100% - 10px) 0, 100% 10px, 100% 100%, 10px 100%, 0 calc(100% - 10px));
box-shadow: 0 0 15px rgba(196, 207, 219, 0.1);
}
.button::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.1) 49%, rgba(196, 207, 219, 0.1) 51%, transparent 52%);
background-size: 8px 8px;
pointer-events: none;
opacity: 0.5;
}
.button::after {
content: '';
position: absolute;
inset: -1px;
pointer-events: none;
background:
linear-gradient(to right, rgb(196, 207, 219) 8px, transparent 8px) top left,
linear-gradient(to bottom, rgb(196, 207, 219) 8px, transparent 8px) top left,
linear-gradient(to left, rgb(196, 207, 219) 8px, transparent 8px) bottom right,
linear-gradient(to top, rgb(196, 207, 219) 8px, transparent 8px) bottom right;
background-size: 20px 1px, 1px 20px, 20px 1px, 1px 20px;
background-repeat: no-repeat;
opacity: 0.4;
}
.button:hover {
background: rgba(254, 105, 118, 0.1);
border-color: rgb(254, 105, 118);
transform: translateY(-1px);
box-shadow: 0 0 20px rgba(254, 105, 118, 0.1);
color: rgb(254, 105, 118);
text-shadow: 0 0 5px rgba(254, 105, 118, 0.3);
}
.button:active {
transform: translateY(0);
}
/* Template link */
.template-link {
display: flex;
align-items: center;
gap: 5px;
color: rgb(196, 207, 219);
font-weight: 500;
padding: 8px 12px;
border-radius: 0;
background: rgba(196, 207, 219, 0.05);
border: 1px solid rgb(196, 207, 219);
transition: all 0.3s ease;
position: relative;
overflow: hidden;
text-transform: uppercase;
letter-spacing: 1px;
clip-path: polygon(0 0, calc(100% - 8px) 0, 100% 8px, 100% 100%, 8px 100%, 0 calc(100% - 8px));
box-shadow: 0 0 10px rgba(196, 207, 219, 0.1);
}
.template-link::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.1) 49%, rgba(196, 207, 219, 0.1) 51%, transparent 52%);
background-size: 10px 10px;
pointer-events: none;
}
.template-link::after {
content: '';
position: absolute;
inset: -1px;
pointer-events: none;
background:
linear-gradient(to right, rgb(196, 207, 219) 8px, transparent 8px) top left,
linear-gradient(to bottom, rgb(196, 207, 219) 8px, transparent 8px) top left,
linear-gradient(to left, rgb(196, 207, 219) 8px, transparent 8px) bottom right,
linear-gradient(to top, rgb(196, 207, 219) 8px, transparent 8px) bottom right;
background-size: 20px 1px, 1px 20px, 20px 1px, 1px 20px;
background-repeat: no-repeat;
opacity: 0.4;
}
.template-link:hover {
background: rgba(254, 105, 118, 0.1);
border-color: rgb(254, 105, 118);
color: rgb(254, 105, 118);
text-shadow: 0 0 5px rgba(254, 105, 118, 0.3);
box-shadow: 0 0 15px rgba(254, 105, 118, 0.1);
}
.link-arrow {
font-size: 18px;
line-height: 1;
transform: translateY(1px);
}
/* Template content */
.template-content {
display: flex;
align-items: center;
gap: 10px;
position: relative;
z-index: 1;
}
.template-author {
color: rgba(196, 207, 219, 0.7);
font-size: 14px;
}
/* Info card */
.info-card {
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
overflow: hidden;
position: relative;
box-shadow: 0 0 20px rgba(196, 207, 219, 0.1);
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
}
.info-header {
background: rgba(196, 207, 219, 0.05);
padding: 20px;
border-bottom: 1px solid rgb(196, 207, 219);
position: relative;
clip-path: polygon(0 0, 100% 0, 100% calc(100% - 15px), calc(100% - 15px) 100%, 0 100%);
}
.info-header h3 {
margin: 0 0 10px 0;
}
/* Model tags */
.model-tags {
display: flex;
gap: 8px;
flex-wrap: wrap;
}
.model-tag {
background: rgba(196, 207, 219, 0.05);
color: rgb(196, 207, 219);
padding: 4px 12px;
border-radius: 0;
font-size: 12px;
border: 1px solid rgb(196, 207, 219);
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 8px) 0, 100% 8px, 100% 100%, 8px 100%, 0 calc(100% - 8px));
text-transform: uppercase;
letter-spacing: 1px;
}
.model-tag::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.1) 49%, rgba(196, 207, 219, 0.1) 51%, transparent 52%);
background-size: 10px 10px;
}
/* Model composition */
.model-composition {
padding: 20px;
border-bottom: 1px solid rgba(196, 207, 219, 0.2);
position: relative;
}
.composition-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 12px;
}
.composition-list li {
color: #E0E0E0;
display: flex;
align-items: baseline;
gap: 12px;
padding-left: 20px;
position: relative;
}
.composition-list li::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(196, 207, 219);
transform: translateY(-50%) skewX(-20deg);
}
.model-component {
color: rgb(254, 105, 118);
font-weight: 500;
min-width: 120px;
text-shadow: 0 0 5px rgba(254, 105, 118, 0.3);
letter-spacing: 1px;
}
/* Model description */
.model-description {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
padding: 20px;
position: relative;
overflow: hidden;
}
/* Template card */
.template-card {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
padding: 20px;
position: relative;
overflow: hidden;
}
/* Quantized section cards */
.quantized-container {
display: grid;
gap: 20px;
}
.quantized-section {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
padding: 20px;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
box-shadow: 0 0 20px rgba(196, 207, 219, 0.1);
}
.quantized-items {
display: grid;
gap: 12px;
}
.quantized-item {
display: flex;
align-items: baseline;
gap: 12px;
position: relative;
}
.quantized-item .author {
color: rgba(224, 224, 224, 0.7);
min-width: 100px;
position: relative;
padding-left: 15px;
}
.quantized-item .author::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(196, 207, 219);
transform: translateY(-50%) skewX(-20deg);
}
.multi-links {
display: flex;
align-items: center;
gap: 12px;
}
.separator {
color: rgba(196, 207, 219, 0.5);
transform: skewX(-20deg);
}
/* Config cards */
.config-container {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
overflow: hidden;
position: relative;
}
.config-header {
background: rgba(196, 207, 219, 0.05);
padding: 15px 20px;
border-bottom: 1px solid rgba(196, 207, 219, 0.2);
position: relative;
}
.model-name {
color: rgb(196, 207, 219);
font-weight: 600;
}
.config-content {
padding: 20px;
}
.config-item {
display: flex;
flex-direction: column;
gap: 5px;
margin-bottom: 15px;
position: relative;
padding-left: 15px;
}
.config-item::before {
content: '';
position: absolute;
left: 0;
top: 10px;
width: 8px;
height: 2px;
background: rgb(196, 207, 219);
transform: skewX(-20deg);
}
.config-label {
color: rgb(196, 207, 219);
font-size: 14px;
font-weight: 500;
}
.config-value {
color: #E0E0E0;
font-family: 'Courier New', monospace;
}
/* Settings grid */
.settings-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(250px, 1fr));
gap: 20px;
margin-top: 20px;
}
.settings-card {
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 10px) 0, 100% 10px, 100% 100%, 10px 100%, 0 calc(100% - 10px));
}
.settings-header {
background: rgba(196, 207, 219, 0.05);
padding: 15px 20px;
border-bottom: 1px solid rgb(196, 207, 219);
}
.settings-header h3 {
margin: 0;
color: rgb(196, 207, 219);
font-size: 1.1em;
}
.settings-author {
display: block;
font-size: 0.9em;
color: rgba(224, 224, 224, 0.7);
margin-top: 5px;
}
.settings-content {
padding: 15px 20px;
}
.setting-item {
display: flex;
justify-content: space-between;
align-items: center;
padding: 8px 0;
border-bottom: 1px solid rgba(196, 207, 219, 0.1);
}
.setting-item:last-child {
border-bottom: none;
}
.setting-label {
color: rgba(224, 224, 224, 0.9);
font-size: 0.95em;
}
.setting-value {
color: rgb(254, 105, 118);
font-family: 'Courier New', monospace;
font-weight: 500;
}
.setting-item.highlight {
display: flex;
justify-content: center;
padding: 15px 0;
}
.setting-item.highlight .setting-value {
font-size: 1.2em;
color: rgb(254, 105, 118);
}
/* Model list */
.model-list {
list-style: none;
padding: 0;
margin: 10px 0 0 0;
}
.model-list li {
color: #E0E0E0;
font-family: 'Courier New', monospace;
padding: 8px 0 8px 20px;
position: relative;
}
.model-list li::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(196, 207, 219);
transform: translateY(-50%) skewX(-20deg);
}
/* Container */
.container {
max-width: 1200px;
margin: 0 auto;
padding: 40px 20px;
position: relative;
}
.container::after {
content: '';
position: absolute;
top: 0;
right: 0;
width: 200px;
height: 200px;
background: radial-gradient(circle at top right, rgba(196, 207, 219, 0.1), transparent 70%);
pointer-events: none;
z-index: 0;
}
/* Header */
.header {
text-align: center;
margin-bottom: 40px;
position: relative;
padding: 20px;
background: rgba(26, 26, 26, 0.98);
border: 1px solid rgb(196, 207, 219);
clip-path: polygon(0 0, calc(100% - 20px) 0, 100% 20px, 100% 100%, 20px 100%, 0 calc(100% - 20px));
box-shadow: 0 0 30px rgba(196, 207, 219, 0.1);
}
.header h1 {
color: rgb(196, 207, 219);
text-shadow:
0 0 10px rgba(254, 105, 118, 0.3),
0 0 20px rgba(254, 105, 118, 0.2),
0 0 30px rgba(254, 105, 118, 0.1);
letter-spacing: 3px;
font-size: 2.5em;
font-weight: 700;
text-transform: uppercase;
}
.header::after {
content: '';
position: absolute;
bottom: 15px;
left: 50%;
transform: translateX(-50%);
width: 200px;
height: 2px;
background: linear-gradient(90deg,
transparent,
rgb(254, 105, 118) 20%,
rgb(254, 105, 118) 80%,
transparent
);
box-shadow: 0 0 10px rgba(254, 105, 118, 0.3);
}
/* Info section */
.info {
display: grid;
gap: 30px;
position: relative;
}
.info::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background:
linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.05) 49%, rgba(196, 207, 219, 0.05) 51%, transparent 52%) 0 0/20px 20px;
pointer-events: none;
z-index: -1;
}
/* Banner image */
.info img {
width: 100%;
height: auto;
border: 2px solid rgb(196, 207, 219);
position: relative;
clip-path: polygon(0 0, calc(100% - 20px) 0, 100% 20px, 100% 100%, 20px 100%, 0 calc(100% - 20px));
box-shadow:
0 0 30px rgba(196, 207, 219, 0.2),
0 0 60px rgba(196, 207, 219, 0.1);
filter: contrast(1.1) brightness(1.05);
}
.info img:hover {
box-shadow:
0 0 40px rgba(254, 105, 118, 0.2),
0 0 80px rgba(254, 105, 118, 0.1);
transition: all 0.3s ease;
}
/* Creator section */
.creator-section {
display: flex;
justify-content: flex-end;
margin: -20px 0 20px;
position: relative;
z-index: 1;
}
.creator-section::before {
content: '';
position: absolute;
top: 50%;
right: 0;
width: 50%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(196, 207, 219, 0.2));
transform: translateY(-50%);
z-index: -1;
}
.creator-badge {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
padding: 8px 15px;
display: flex;
align-items: center;
gap: 10px;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 10px) 0, 100% 10px, 100% 100%, 10px 100%, 0 calc(100% - 10px));
box-shadow: 0 0 15px rgba(196, 207, 219, 0.1);
}
.creator-badge::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(45deg, transparent 48%, rgba(196, 207, 219, 0.05) 49%, rgba(196, 207, 219, 0.05) 51%, transparent 52%);
background-size: 10px 10px;
pointer-events: none;
}
.creator-label {
color: rgb(196, 207, 219);
font-size: 14px;
text-transform: uppercase;
letter-spacing: 1px;
text-shadow: 0 0 5px rgba(196, 207, 219, 0.2);
}
.creator-link {
display: flex;
align-items: center;
gap: 5px;
color: rgb(254, 105, 118);
font-weight: 500;
}
.creator-name {
position: relative;
}
.creator-arrow {
font-size: 18px;
line-height: 1;
transform: translateY(1px);
}
/* Details element styling */
details {
margin: 15px 0;
}
summary {
cursor: pointer;
color: rgb(196, 207, 219);
font-weight: 500;
margin-bottom: 15px;
position: relative;
padding-left: 20px;
}
summary::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(254, 105, 118);
transform: translateY(-50%) skewX(-20deg);
}
summary::marker,
summary::-webkit-details-marker {
display: none;
}
/* Special Thanks Section */
.special-thanks {
background: rgba(26, 26, 26, 0.95);
border: 1px solid rgb(196, 207, 219);
padding: 20px;
margin: 20px 0;
position: relative;
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
}
.special-thanks h3 {
color: rgb(196, 207, 219);
margin-bottom: 15px;
position: relative;
padding-left: 20px;
}
.special-thanks h3::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 8px;
height: 2px;
background: rgb(254, 105, 118);
transform: translateY(-50%) skewX(-20deg);
}
.thanks-list {
list-style: none;
padding: 0;
margin: 0;
display: grid;
gap: 10px;
}
.thanks-list li {
color: rgb(196, 207, 219);
padding-left: 15px;
position: relative;
}
.thanks-list li strong {
color: rgb(254, 105, 118);
font-weight: 500;
}
.thanks-list li::before {
content: '';
position: absolute;
left: 0;
top: 50%;
width: 6px;
height: 1px;
background: rgba(196, 207, 219, 0.3);
transform: translateY(-50%) skewX(-20deg);
}
.thanks-note {
margin-top: 15px;
color: rgba(196, 207, 219, 0.7);
font-style: italic;
font-size: 0.9em;
}
/* Responsive adjustments */
@media (max-width: 768px) {
.container {
padding: 20px;
}
.core-metrics-grid,
.info-grid {
grid-template-columns: 1fr;
}
.creator-section {
justify-content: flex-start;
}
}
/* Metrics section */
.metrics-section {
margin-bottom: 30px;
position: relative;
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
padding: 20px;
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
box-shadow:
0 0 20px rgba(196, 207, 219, 0.1),
0 0 40px rgba(196, 207, 219, 0.05);
}
/* Core metrics grid */
.core-metrics-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(200px, 1fr));
gap: 15px;
margin-bottom: 30px;
}
.info-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(150px, 1fr));
gap: 15px;
}
/* Metric box */
.metric-box {
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
border-radius: 0;
padding: 15px;
display: flex;
flex-direction: column;
gap: 8px;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 10px) 0, 100% 10px, 100% 100%, 10px 100%, 0 calc(100% - 10px));
box-shadow:
0 0 15px rgba(196, 207, 219, 0.1),
0 0 30px rgba(196, 207, 219, 0.05);
}
.metric-box .label {
color: rgb(196, 207, 219);
font-size: 14px;
font-weight: 500;
text-transform: uppercase;
letter-spacing: 1px;
text-shadow: 0 0 5px rgba(196, 207, 219, 0.2);
}
.metric-box .value {
color: rgb(254, 105, 118);
font-size: 28px;
font-weight: 700;
text-shadow:
0 0 10px rgba(254, 105, 118, 0.3),
0 0 20px rgba(254, 105, 118, 0.2);
letter-spacing: 1px;
}
/* Progress metrics */
.progress-metrics {
display: grid;
gap: 15px;
padding: 20px;
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
box-shadow:
0 0 20px rgba(196, 207, 219, 0.1),
0 0 40px rgba(196, 207, 219, 0.05);
}
.progress-metric {
display: grid;
gap: 8px;
}
.progress-label {
display: flex;
justify-content: space-between;
align-items: center;
color: rgb(196, 207, 219);
font-size: 14px;
text-transform: uppercase;
letter-spacing: 1px;
text-shadow: 0 0 5px rgba(196, 207, 219, 0.2);
}
.progress-value {
color: rgb(254, 105, 118);
font-weight: 600;
text-shadow:
0 0 5px rgba(254, 105, 118, 0.3),
0 0 10px rgba(254, 105, 118, 0.2);
}
/* Progress bars */
.progress-bar {
height: 4px;
background: rgba(196, 207, 219, 0.1);
border-radius: 0;
overflow: hidden;
position: relative;
border: 1px solid rgba(196, 207, 219, 0.2);
clip-path: polygon(0 0, 100% 0, calc(100% - 4px) 100%, 0 100%);
}
.progress-fill {
height: 100%;
background: linear-gradient(90deg, rgb(254, 105, 118), rgb(254, 125, 138));
border-radius: 0;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 4px) 0, 100% 100%, 0 100%);
box-shadow:
0 0 10px rgba(254, 105, 118, 0.2),
0 0 20px rgba(254, 105, 118, 0.1);
}
.progress-fill::after {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: linear-gradient(90deg,
rgba(255, 255, 255, 0.1) 0%,
rgba(255, 255, 255, 0.1) 40%,
rgba(255, 255, 255, 0.3) 50%,
rgba(255, 255, 255, 0.1) 60%,
rgba(255, 255, 255, 0.1) 100%
);
background-size: 300% 100%;
animation: shimmer 2s infinite;
}
/* Split progress bars */
.progress-metric.split .progress-label {
justify-content: space-between;
font-size: 13px;
}
.progress-bar.split {
display: flex;
background: rgba(196, 207, 219, 0.1);
position: relative;
justify-content: center;
border: 1px solid rgba(196, 207, 219, 0.2);
clip-path: polygon(0 0, 100% 0, calc(100% - 4px) 100%, 0 100%);
}
.progress-bar.split::after {
content: '';
position: absolute;
top: 0;
left: 50%;
transform: translateX(-50%);
width: 2px;
height: 100%;
background: rgba(196, 207, 219, 0.3);
z-index: 2;
box-shadow: 0 0 10px rgba(196, 207, 219, 0.2);
}
.progress-fill-left,
.progress-fill-right {
height: 100%;
background: linear-gradient(90deg, rgb(254, 105, 118), rgb(254, 125, 138));
position: relative;
width: 50%;
overflow: hidden;
}
.progress-fill-left {
clip-path: polygon(0 0, calc(100% - 4px) 0, 100% 100%, 0 100%);
margin-right: 1px;
transform-origin: right;
transform: scaleX(var(--scale, 0));
box-shadow:
0 0 10px rgba(254, 105, 118, 0.2),
0 0 20px rgba(254, 105, 118, 0.1);
}
.progress-fill-right {
clip-path: polygon(0 0, 100% 0, 100% 100%, 4px 100%);
margin-left: 1px;
transform-origin: left;
transform: scaleX(var(--scale, 0));
box-shadow:
0 0 10px rgba(254, 105, 118, 0.2),
0 0 20px rgba(254, 105, 118, 0.1);
}
/* Benchmark container */
.benchmark-container {
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 15px) 0, 100% 15px, 100% 100%, 15px 100%, 0 calc(100% - 15px));
box-shadow:
0 0 20px rgba(196, 207, 219, 0.1),
0 0 40px rgba(196, 207, 219, 0.05);
padding: 20px;
}
/* Benchmark notification */
.benchmark-notification {
background: rgba(32, 32, 32, 0.95);
border: 1px solid rgb(196, 207, 219);
padding: 15px;
margin-bottom: 20px;
position: relative;
overflow: hidden;
clip-path: polygon(0 0, calc(100% - 10px) 0, 100% 10px, 100% 100%, 10px 100%, 0 calc(100% - 10px));
box-shadow:
0 0 15px rgba(196, 207, 219, 0.1),
0 0 30px rgba(196, 207, 219, 0.05);
}
.notification-content {
display: flex;
align-items: center;
gap: 10px;
position: relative;
z-index: 1;
}
.notification-icon {
font-size: 20px;
color: rgb(254, 105, 118);
text-shadow:
0 0 10px rgba(254, 105, 118, 0.3),
0 0 20px rgba(254, 105, 118, 0.2);
}
.notification-text {
color: rgb(196, 207, 219);
font-size: 14px;
display: flex;
align-items: center;
gap: 10px;
flex-wrap: wrap;
text-transform: uppercase;
letter-spacing: 1px;
text-shadow: 0 0 5px rgba(196, 207, 219, 0.2);
}
.benchmark-link {
color: rgb(254, 105, 118);
font-weight: 500;
white-space: nowrap;
text-shadow:
0 0 5px rgba(254, 105, 118, 0.3),
0 0 10px rgba(254, 105, 118, 0.2);
position: relative;
padding: 2px 5px;
border: 1px solid rgba(196, 207, 219, 0.2);
clip-path: polygon(0 0, calc(100% - 5px) 0, 100% 5px, 100% 100%, 5px 100%, 0 calc(100% - 5px));
}
@keyframes shimmer {
0% { background-position: 200% 0; }
100% { background-position: -200% 0; }
}
</style>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>L3.3-San-Mai-R1-70b</title>
<link href="https://fonts.googleapis.com/css2?family=Quicksand:wght@400;500;600&display=swap" rel="stylesheet">
<link href="styles/components/layout.css" rel="stylesheet">
<link href="styles/components/metrics.css" rel="stylesheet">
<link href="styles/components/cards.css" rel="stylesheet">
<link href="styles/components/buttons.css" rel="stylesheet">
<link href="styles/components/animations.css" rel="stylesheet">
<link href="styles/main.css" rel="stylesheet">
</head>
<body>
<div class="container">
<div class="header">
<h1>L3.3-San-Mai-R1-70b</h1>
</div>
<div class="info">
<img src="https://cdn-uploads.huggingface.co/production/uploads/64545af5ec40bbbd01242ca6/8fZQZaLM0XO9TyKh-yMQ7.jpeg" alt="Model banner">
<div class="creator-section">
<div class="creator-badge" style="display: flex; align-items: center; gap: 1rem;">
<div class="creator-info">
<span class="creator-label">Created by</span>
<a href="https://huggingface.co/Steelskull" target="_blank" class="creator-link">
<span class="creator-name">SteelSkull</span>
<span class="creator-arrow">→</span>
</a>
</div>
<a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank" class="button" style="margin: 0; padding: 0.5rem 1rem;">
Support on Ko-fi
</a>
</div>
</div>
<div class="model-info">
<h2>Model Information</h2>
<div class="info-card">
<div class="info-header">
<h3>L3.3-San-Mai-R1-70b v0.5.OG</h3>
<div class="model-tags">
<span class="model-tag">L3.3 = Llama 3.3</span>
<span class="model-tag">SCE Merge</span>
<span class="model-tag">R1 = Deepseek R1</span>
<span class="model-tag">70b Parameters</span>
<span class="model-tag">v0.5.OG</span>
</div>
</div>
<div class="model-composition">
<h4>Model Composition</h4>
<ul class="composition-list">
<li><span class="model-component base-model"><a href="https://huggingface.co/TheSkullery/L3.1x3.3-DS-Hydroblated-R1-70B-v4.1" target="_blank">L3.1x3.3-DS-Hydroblated-R1-70B-v4.1</a></span> Base model</li>
<li><span class="model-component"><a href="https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0" target="_blank">EVA-LLaMA-3.33-70B-v0.0</a></span> Core capabilities</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3.3-70B-Euryale-v2.3" target="_blank">L3.3-70B-Euryale-v2.3</a></span> Enhanced reasoning</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1" target="_blank">70B-L3.3-Cirrus-x1</a></span> Improved coherence</li>
<li><span class="model-component"><a href="https://huggingface.co/Sao10K/L3.1-70B-Hanami-x1" target="_blank">L3.1-70B-Hanami-x1</a></span> Balanced responses</li>
<li><span class="model-component"><a href="https://huggingface.co/TheDrummer/Anubis-70B-v1" target="_blank">Anubis-70B-v1</a></span> Enhanced detail</li>
<li><span class="model-component"><a href="https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B" target="_blank">Negative_LLAMA_70B</a></span> Reduced bias</li>
</ul>
<p></p>
<div class="model-description">
<h4>Model Series Overview</h4>
<p>L3.3-San-Mai-R1-70b represents the foundational release in a three-part model series, followed by L3.3-Cu-Mai-R1-70b (Version A) and L3.3-Mokume-Gane-R1-70b (Version C). The name "San-Mai" draws inspiration from the Japanese bladesmithing technique of creating three-layer laminated composite metals, known for combining a hard cutting edge with a tougher spine - a metaphor for this model's balanced approach to AI capabilities.</p>
<h4>Technical Architecture</h4>
<p>Built on a custom DeepSeek R1 Distill base (DS-Hydroblated-R1-v4.1), San-Mai-R1 integrates specialized components through the SCE merge method:</p>
<ul>
<li>EVA and EURYALE foundations for creative expression and scene comprehension</li>
<li>Cirrus and Hanami elements for enhanced reasoning capabilities</li>
<li>Anubis components for detailed scene description</li>
<li>Negative_LLAMA integration for balanced perspective and response</li>
</ul>
<h4>Core Capabilities</h4>
<p>As the OG model in the series, San-Mai-R1 serves as the gold standard and reliable baseline. User feedback consistently highlights its superior intelligence, coherence, and unique ability to provide deep character insights. Through proper prompting, the model demonstrates advanced reasoning capabilities and an "X-factor" that enables unprompted exploration of character inner thoughts and motivations.</p>
<h4>Base Architecture</h4>
<p>The model utilizes the custom Hydroblated-R1 base, engineered for stability and enhanced reasoning. The SCE merge method's settings are precisely tuned based on extensive community feedback, ensuring optimal component integration while maintaining model coherence and reliability. This foundation establishes San-Mai-R1 as the benchmark upon which its variant models build and expand.</p>
</div>
</div>
</div>
<h2>UGI-Benchmark Results:</h2>
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-icon">🏆</span>
<span class="notification-text">
Latest benchmark results as of 02/20/2025.
<a href="https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<div class="metrics-section">
<h3>Core Metrics</h3>
<div class="core-metrics-grid">
<div class="metric-box">
<span class="label">UGI Score</span>
<span class="value">40.04</span>
</div>
<div class="metric-box">
<span class="label">Willingness Score</span>
<span class="value">2.5/10</span>
</div>
<div class="metric-box">
<span class="label">Natural Intelligence</span>
<span class="value">42.36</span>
</div>
<div class="metric-box">
<span class="label">Coding Ability</span>
<span class="value">22</span>
</div>
</div>
</div>
<div class="metrics-section">
<h3>Model Information</h3>
<div class="info-grid">
<div class="metric-box">
<span class="label">Political Lean</span>
<span class="value">-8.5%</span>
</div>
<div class="metric-box">
<span class="label">Ideology</span>
<span class="value">Liberalism</span>
</div>
<div class="metric-box">
<span class="label">Parameters</span>
<span class="value">70B</span>
</div>
</div>
</div>
<div class="metrics-section">
<details>
<summary>Aggregated Scores</summary>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>Diplomacy</span>
<span class="progress-value">61.7%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 61.7%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Government</span>
<span class="progress-value">44.6%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 44.6%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Economy</span>
<span class="progress-value">43.3%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 43.3%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>Society</span>
<span class="progress-value">60.0%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 60.0%"></div>
</div>
</div>
</div>
</details>
</div>
<div class="metrics-section">
<details>
<summary>Individual Scores</summary>
<div class="progress-metrics">
<div class="progress-metric split">
<div class="progress-label">
<span>Federal</span>
<span class="progress-value">46.0%</span>
<span>Unitary</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.460"></div>
<div class="progress-fill-right" style="--scale: 0.540"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Democratic</span>
<span class="progress-value">67.5%</span>
<span>Autocratic</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.675"></div>
<div class="progress-fill-right" style="--scale: 0.325"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Security</span>
<span class="progress-value">47.5%</span>
<span>Freedom</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.475"></div>
<div class="progress-fill-right" style="--scale: 0.525"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Nationalism</span>
<span class="progress-value">40.4%</span>
<span>Int'l</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.404"></div>
<div class="progress-fill-right" style="--scale: 0.596"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Militarist</span>
<span class="progress-value">32.9%</span>
<span>Pacifist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.329"></div>
<div class="progress-fill-right" style="--scale: 0.671"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Assimilationist</span>
<span class="progress-value">41.5%</span>
<span>Multiculturalist</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.415"></div>
<div class="progress-fill-right" style="--scale: 0.585"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Collectivize</span>
<span class="progress-value">43.3%</span>
<span>Privatize</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.433"></div>
<div class="progress-fill-right" style="--scale: 0.567"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Planned</span>
<span class="progress-value">42.9%</span>
<span>LaissezFaire</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.429"></div>
<div class="progress-fill-right" style="--scale: 0.571"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Isolationism</span>
<span class="progress-value">43.8%</span>
<span>Globalism</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.438"></div>
<div class="progress-fill-right" style="--scale: 0.562"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Irreligious</span>
<span class="progress-value">57.9%</span>
<span>Religious</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.579"></div>
<div class="progress-fill-right" style="--scale: 0.421"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Progressive</span>
<span class="progress-value">57.3%</span>
<span>Traditional</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.573"></div>
<div class="progress-fill-right" style="--scale: 0.427"></div>
</div>
</div>
<div class="progress-metric split">
<div class="progress-label">
<span>Acceleration</span>
<span class="progress-value">64.8%</span>
<span>Bioconservative</span>
</div>
<div class="progress-bar split">
<div class="progress-fill-left" style="--scale: 0.648"></div>
<div class="progress-fill-right" style="--scale: 0.352"></div>
</div>
</div>
</div>
</details>
</div>
</div>
<!-- Open LLM-Benchmark Results - TO BE UPDATED -->
<!--<h2>Open LLM-Benchmark Results:</h2>
<div class="benchmark-container">
<div class="benchmark-notification">
<div class="notification-content">
<span class="notification-text">
Average Score: 43.68%
<a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?rankingMode=dynamic" target="_blank" class="benchmark-link">
View Full Leaderboard →
</a>
</span>
</div>
</div>
<div class="progress-metrics">
<div class="progress-metric">
<div class="progress-label">
<span>IFEval</span>
<span class="progress-value">60.24%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 60.24%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>BBH</span>
<span class="progress-value">56.17%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 56.17%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MATH</span>
<span class="progress-value">46.68%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 46.68%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>GPQA</span>
<span class="progress-value">29.19%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 29.19%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MUSR</span>
<span class="progress-value">20.19%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 20.19%"></div>
</div>
</div>
<div class="progress-metric">
<div class="progress-label">
<span>MMLU-Pro</span>
<span class="progress-value">49.59%</span>
</div>
<div class="progress-bar">
<div class="progress-fill" style="width: 49.59%"></div>
</div>
</div>
</div>
</div>-->
<div class="component-section" id="settings">
<div class="section-container">
<h2>Recommended Sampler Settings: <strong> By @Geechan</strong></h2>
<div class="settings-grid">
<div class="settings-card">
<div class="settings-header">
<h3>Static Temperature:</h3>
</div>
<div class="settings-content">
<div class="setting-item highlight">
<span class="setting-value">1 - 1.05</span>
</div>
</div>
</div>
<div class="settings-card">
<div class="settings-header">
<h3>Min P</h3>
</div>
<div class="settings-content">
<div class="setting-item highlight">
<span class="setting-value">0.015</span>
</div>
</div>
</div>
<div class="settings-card">
<div class="settings-header">
<h3>DRY Settings: (optional)</h3>
</div>
<div class="settings-content">
<div class="setting-item">
<span class="setting-label">Multiplier</span>
<span class="setting-value">0.8</span>
</div>
<div class="setting-item">
<span class="setting-label">Base</span>
<span class="setting-value">1.75</span>
</div>
<div class="setting-item">
<span class="setting-label">Length</span>
<span class="setting-value">4</span>
</div>
</div>
</div>
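                                <div class="settings-card">
                                    <div class="settings-header">
                                        <h3>Example Payload (illustrative):</h3>
                                    </div>
                                    <div class="settings-content">
                                        <div class="setting-item">
                                            <p>A minimal sketch of the settings above as a Python payload for an OpenAI-compatible backend. The model name and the DRY field names vary by backend and are assumptions here, not part of the original recommendations:</p>
                                            <pre><code>payload = {
    "model": "L3.3-San-Mai-R1-70b",  # illustrative model name
    "messages": messages,
    "temperature": 1.0,       # static temperature: 1 - 1.05
    "min_p": 0.015,           # Min P
    "dry_multiplier": 0.8,    # DRY settings (optional)
    "dry_base": 1.75,
    "dry_allowed_length": 4,
}</code></pre>
                                        </div>
                                    </div>
                                </div>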
</div>
</div>
</div>
<div class="section-container">
<h2>Recommended Templates & Prompts</h2>
<div class="template-card">
<div class="template-item">
<div class="template-content">
<a href="https://huggingface.co/Konnect1221/Methception-Llamaception-SillyTavern-Preset" target="_blank" class="template-link">
LLam@ception
<span class="link-arrow">→</span>
</a>
<span class="template-author">by @.konnect</span>
</div>
</div>
<div class="template-item">
<div class="template-content">
<a href="https://huggingface.co/Steelskull/L3.3-San-Mai-R1-70b/blob/main/LeCeption-XML-V2-Thinking.json" target="_blank" class="template-link">
LeCeption
<span class="link-arrow">→</span>
</a>
<span class="template-author">by @Steel</span> > A completly revamped XML version of Llam@ception 1.5.2 with stepped thinking and Reasoning added
</div>
</div>
</div>
<div class="settings-card">
<div class="settings-header">
<h3>LECEPTION REASONING CONFIGURATION:</h3>
</div>
<div class="settings-content">
<div class="settings-grid">
<div class="settings-card">
<div class="settings-header">
<h3>Start Reply With:</h3>
</div>
<div class="settings-content">
<div class="setting-item">
<p>'<span style="color: #ff6b6b"><think></span> OK, as an objective, detached narrative analyst, let's think this through carefully:'</p>
</div>
</div>
</div>
<div class="settings-card">
<div class="settings-header">
<h3>Reasoning Formatting (no spaces):</h3>
</div>
<div class="settings-content">
<div class="setting-item">
<span class="setting-label">Prefix:</span>
<span class="setting-value">'<span style="color: #ff6b6b"><think></span>'</span>
</div>
<div class="setting-item">
<span class="setting-label">Suffix:</span>
<span class="setting-value">'<span style="color: #ff6b6b"></think></span>'</span>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<!--<div class="section-container">
<h2>Quantized Versions</h2>
<div class="quantized-container">
<div class="quantized-section">
<h3>GGUF Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">bartowski</span>
<a href="https://huggingface.co/bartowski/Steelskull_L3.3-San-Mai-R1-GGUF" target="_blank">
Combined-GGUF
<span class="link-arrow">→</span>
</a>
</div>
<div class="quantized-item">
<span class="author">mradermacher</span>
<div class="multi-links">
<a href="https://huggingface.co/mradermacher/L3.3-San-Mai-R1-GGUF" target="_blank">
GGUF
<span class="link-arrow">→</span>
</a>
<span class="separator">//</span>
<a href="https://huggingface.co/mradermacher/L3.3-San-Mai-R1-i1-GGUF" target="_blank">
Imat-GGUF
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
<div class="quantized-section">
<h3>EXL2 Quantizations</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">ReadyArt</span>
<div class="multi-links">
<a href="https://huggingface.co/ReadyArt/L3.3-San-Mai-R1_EXl2_8.0bpw_H8" target="_blank">
8.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
<span class="separator">//</span>
<a href="https://huggingface.co/ReadyArt/L3.3-San-Mai-R1_EXl2_6.65bpw_H8" target="_blank">
6.65BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
</div>
<div class="quantized-item">
<span class="author">Darkhn</span>
<a href="https://huggingface.co/Darkhn/Steelskull-L3.3-San-Mai-R1-6.0bpw-h8-exl2" target="_blank">
6.0BPW-EXL2
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
<div class="quantized-section">
<h3>FP8 Dynamic</h3>
<div class="quantized-items">
<div class="quantized-item">
<span class="author">yeyaowei</span>
<a href="https://huggingface.co/yeyaowei/L3.3-San-Mai-R1-FP8-Dynamic" target="_blank">
FP8-Dynamic
<span class="link-arrow">→</span>
</a>
</div>
</div>
</div>
</div>
</div>-->
<div class="support-section">
<h2>Support & Community:</h2>
<div class="support-buttons">
<a href="https://ko-fi.com/Y8Y0AO2XE" target="_blank" class="button">
Support on Ko-fi
</a>
<a href="https://discord.gg/4tCngSm3qZ" target="_blank" class="button">
Join Discord
</a>
</div>
<div class="special-thanks">
<h3>Special Thanks</h3>
<ul class="thanks-list">
<li><strong>@Geechan</strong> for feedback and sampler settings</li>
<li><strong>@Konnect</strong> for their feedback and templates</li>
<li><strong>@Kistara</strong> for their feedback and help with the model mascot design</li>
<li><strong>@Thana Alt</strong> for their feedback and Quants</li>
<li><strong>@Lightning_missile</strong> for their feedback</li>
<li><strong>The Arli community</strong> for feedback and testers</li>
                            <li><strong>The BeaverAI community</strong> for feedback and testers</li>
</ul>
<p class="thanks-note">I wish I could add everyone but im pretty sure it would be as long as the card!</p>
</div>
</div>
</div>
</div>
</body>
</html>
|
Domenico316/Ronzio316 | Domenico316 | 2025-03-04T13:22:38Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-03-04T13:22:38Z | ---
license: apache-2.0
---
|
KoDer123/NerealnostQA2025 | KoDer123 | 2025-03-04T13:20:00Z | 46 | 0 | null | [
"safetensors",
"gguf",
"qwen2",
"unsloth",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-03T08:26:19Z | ---
license: apache-2.0
tags:
- unsloth
---
|
klovuniha/cherakshin_style_LoRA | klovuniha | 2025-03-04T13:16:04Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2025-03-04T13:14:04Z | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - klovuniha/cherakshin_style_LoRA
<Gallery />
## Model description
These are klovuniha/cherakshin_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo collage in CHERKASHIN style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/klovuniha/cherakshin_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
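A minimal sketch, assuming the standard diffusers SDXL + LoRA loading flow (pipeline arguments and the output filename are illustrative):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline this LoRA was trained against.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("klovuniha/cherakshin_style_LoRA")

# Use the trigger phrase from the card to activate the style.
image = pipe("photo collage in CHERKASHIN style").images[0]
image.save("cherkashin_collage.png")
```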
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
shibajustfor/514476e6-c016-4442-aad1-f6ac80227c36 | shibajustfor | 2025-03-04T13:15:47Z | 0 | 0 | peft | [
"peft",
"generated_from_trainer",
"base_model:unsloth/Qwen2-7B",
"base_model:adapter:unsloth/Qwen2-7B",
"region:us"
] | null | 2025-03-04T13:15:32Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: unsloth/Qwen2-7B
model-index:
- name: shibajustfor/514476e6-c016-4442-aad1-f6ac80227c36
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# shibajustfor/514476e6-c016-4442-aad1-f6ac80227c36
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8035
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
SaisExperiments/Mistral-Small-24b-Sertraline-0304-Q6_K-GGUF | SaisExperiments | 2025-03-04T13:15:18Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:allura-org/Mistral-Small-24b-Sertraline-0304",
"base_model:quantized:allura-org/Mistral-Small-24b-Sertraline-0304",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-04T13:13:47Z | ---
base_model: estrogen/ms24b-realsies-inkstruct-ep2-ckpt
tags:
- llama-cpp
- gguf-my-repo
---
# SaisExperiments/ms24b-realsies-inkstruct-ep2-ckpt-Q6_K-GGUF
This model was converted to GGUF format from [`estrogen/ms24b-realsies-inkstruct-ep2-ckpt`](https://huggingface.co/estrogen/ms24b-realsies-inkstruct-ep2-ckpt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/estrogen/ms24b-realsies-inkstruct-ep2-ckpt) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SaisExperiments/ms24b-realsies-inkstruct-ep2-ckpt-Q6_K-GGUF --hf-file ms24b-realsies-inkstruct-ep2-ckpt-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SaisExperiments/ms24b-realsies-inkstruct-ep2-ckpt-Q6_K-GGUF --hf-file ms24b-realsies-inkstruct-ep2-ckpt-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SaisExperiments/ms24b-realsies-inkstruct-ep2-ckpt-Q6_K-GGUF --hf-file ms24b-realsies-inkstruct-ep2-ckpt-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SaisExperiments/ms24b-realsies-inkstruct-ep2-ckpt-Q6_K-GGUF --hf-file ms24b-realsies-inkstruct-ep2-ckpt-q6_k.gguf -c 2048
```
|
valerielucro/Qwen2-0.5B-GRPO-VLLM-mni-epoch-16-peft-merged | valerielucro | 2025-03-04T13:15:14Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T13:14:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Bedru/w2v-bert-2.0-Amharic | Bedru | 2025-03-04T13:14:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2-bert",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2025-03-02T12:08:43Z | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v-bert-2.0-Amharic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# w2v-bert-2.0-Amharic
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 150
- num_epochs: 1
- mixed_precision_training: Native AMP
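The same hyperparameters expressed as a `transformers` `TrainingArguments` sketch (illustrative only; `output_dir` and the `fp16` flag are assumptions based on the "Native AMP" note):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v-bert-2.0-Amharic",   # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,       # effective train batch size: 16
    warmup_steps=150,
    num_train_epochs=1,
    lr_scheduler_type="linear",
    fp16=True,                           # mixed precision via native AMP
)
```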
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 0.0 | 0.992 | 62 | nan | 1.0 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
irishprancer/f7b69c90-d25c-43f4-8c3e-25dc55005304 | irishprancer | 2025-03-04T13:14:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T08:23:47Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hanzla/mamba-finetuned-s1 | hanzla | 2025-03-04T13:14:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:tiiuae/falcon-mamba-7b-instruct",
"base_model:finetune:tiiuae/falcon-mamba-7b-instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T13:13:55Z | ---
base_model: tiiuae/falcon-mamba-7b-instruct
library_name: transformers
model_name: mamba-finetuned-s1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for mamba-finetuned-s1
This model is a fine-tuned version of [tiiuae/falcon-mamba-7b-instruct](https://huggingface.co/tiiuae/falcon-mamba-7b-instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hanzla/mamba-finetuned-s1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hanzla403/falcon-mamba-finetune-s1/runs/cgr2diwt)
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
uriel353/the-pose-prone-with-feet-up | uriel353 | 2025-03-04T13:12:47Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2025-03-04T13:08:24Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
The image is a high-resolution photograph taken outdoors on a sunny day. It
features a young Caucasian woman with long, straight platinum blonde hair
lying on her stomach on a blue and white striped towel. She is wearing a
black bikini with a minimalistic design, which accentuates her fit physique
and prominent buttocks. Her skin is fair and smooth, and she has a natural,
slightly tanned complexion. Her eyes are a striking green, and she has full,
glossy lips with a subtle nude lipstick. The background reveals a
well-maintained garden with lush green grass, a few trees, and a stone
pathway. To the left, there is a white house with a gray roof and a brick
patio area. The sky is clear with a few scattered clouds, suggesting a
pleasant weather day. The overall setting suggests a private, relaxed, and
possibly luxurious backyard environment. The woman's relaxed pose, combined
with the serene background, conveys a sense of leisure and tranquility.
output:
url: images/47666035.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# the-pose-prone-with-feet-up
<Gallery />
## Model description
It's not my model. I just uploaded it here.
https://civitai.com/models/946258/the-pose-prone-with-feet-up
## Download model
Weights for this model are available in Safetensors format.
[Download](/uriel353/the-pose-prone-with-feet-up/tree/main) them in the Files & versions tab.
|
Legalaz/12_llambocm4_08_02 | Legalaz | 2025-03-04T13:12:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T13:05:42Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.8338
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
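A sketch of how a config like this is typically applied with mergekit's Python API (based on mergekit's documented library usage; paths and options are illustrative):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("merge_config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the linear merge and write the result to ./merged-model.
run_merge(merge_config, out_path="./merged-model", options=MergeOptions(cuda=True))
```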
|
AventIQ-AI/wav2vec2-base_speech_emotion_recognition | AventIQ-AI | 2025-03-04T13:10:37Z | 0 | 1 | null | [
"safetensors",
"wav2vec2",
"region:us"
] | null | 2025-03-04T12:47:45Z | # Fine-Tuned Wav2Vec2 for Speech Emotion Recognition
# Model Details
```
Model Name: Fine-Tuned Wav2Vec2 for Speech Emotion Recognition
Base Model: facebook/wav2vec2-base
Dataset: narad/ravdess
Quantization: Available as an optional FP16 version for optimized inference
Training Device: CUDA (GPU)
```
# Dataset Information
```
Dataset Structure:
DatasetDict({
train: Dataset({
features: ['audio', 'text', 'labels', 'speaker_id', 'speaker_gender'],
num_rows: 1440
})
})
```
**Note:** Split manually into 80% train (1,152 examples) and 20% validation (288 examples) during training, as the original dataset provides only a single "train" split.
# Available Splits:
- **Train:** 1,152 examples (after 80/20 split)
- **Validation:** 288 examples (after 80/20 split)
- **Test:** Not provided; external audio used for testing
# Feature Representation:
- **audio:** Raw waveform (48kHz, resampled to 16kHz during preprocessing)
- **text:** Spoken sentence (e.g., "Dogs are sitting by the door")
- **labels:** Integer labels for emotions (0–7)
- **speaker_id:** Actor identifier (e.g., "9")
- **speaker_gender:** Gender of speaker (e.g., "male")
# Training Details
- **Number of Classes:** 8
- **Class Names:** neutral, calm, happy, sad, angry, fearful, disgust, surprised
- **Training Process:** fine-tuned for 10 epochs (initially 3, revised to 10 for better convergence)
- **Learning rate:** 3e-5, with warmup steps (100) and weight decay (0.1)
- **Batch size:** 4 with gradient accumulation (effective batch size 8)
- **Regularization:** dropout added (attention_dropout=0.1, hidden_dropout=0.1)
# Performance Metrics
- **Epochs:** 10
- **Training Loss:** ~0.8
- **Validation Loss:** ~1.2
- **Accuracy:** ~0.65
- **F1 Score:** ~0.63
# Inference Example
```python
import torch
from transformers import Wav2Vec2ForSequenceClassification, Wav2Vec2Processor
import librosa
def load_model(model_path):
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_path)
processor = Wav2Vec2Processor.from_pretrained(model_path)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()
return model, processor, device
def predict_emotion(model_path, audio_path):
model, processor, device = load_model(model_path)
# Load and preprocess audio
audio, sr = librosa.load(audio_path, sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt", padding=True, max_length=160000, truncation=True)
input_values = inputs["input_values"].to(device)
# Inference
with torch.no_grad():
outputs = model(input_values)
logits = outputs.logits
predicted_label = torch.argmax(logits, dim=1).item()
probabilities = torch.softmax(logits, dim=1).squeeze().cpu().numpy()
emotions = ['neutral', 'calm', 'happy', 'sad', 'angry', 'fearful', 'disgust', 'surprised']
return emotions[predicted_label], {emotion: prob for emotion, prob in zip(emotions, probabilities)}
# Example usage
if __name__ == "__main__":
model_path = "path/to/wav2vec2-ravdess-emotion/final_model" # Update with your HF username/repo
audio_path = "path/to/audio.wav"
emotion, probs = predict_emotion(model_path, audio_path)
print(f"Predicted Emotion: {emotion}")
print("Probabilities:", probs)
```
# Quantization & Optimization
- **Quantization:** Optional FP16 version created using PyTorch’s .half() for faster inference with reduced memory footprint.
- **Optimized:** Suitable for deployment on GPU-enabled devices; FP16 version reduces model size by ~50%.
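A minimal sketch of the FP16 conversion described above, using PyTorch's `.half()` (the model path follows the inference example; the output directory is an assumption):
```python
from transformers import Wav2Vec2ForSequenceClassification

# Load the fine-tuned FP32 checkpoint.
model = Wav2Vec2ForSequenceClassification.from_pretrained("path/to/wav2vec2-ravdess-emotion/final_model")

# Convert weights to FP16, roughly halving the model size.
model = model.half()
model.save_pretrained("wav2vec2-ravdess-emotion-fp16")  # assumed output path
```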
# Usage
- **Input:** Raw audio files (.wav) resampled to 16kHz
- **Output:** Predicted emotion label (one of 8 classes) with confidence probabilities
# Limitations
- **Generalization:** Trained on acted speech (RAVDESS), may underperform on spontaneous or noisy real-world audio.
- **Dataset Size:** Limited to 1,440 samples, potentially insufficient for robust emotion recognition across diverse conditions.
- **Accuracy:** Performance on external audio varies; retraining with augmentation or larger datasets may be needed.
# Future Improvements
- **Data Augmentation:** Incorporate noise, pitch shift, or speed changes to improve robustness.
- **Larger Dataset:** Combine with additional SER datasets (e.g., IEMOCAP, CREMA-D) for diversity.
- **Model Tuning:** Experiment with freezing lower layers or using a model pre-trained for SER (e.g., facebook/wav2vec2-large-robust). |
ISEGURA/mdeberta-v3-base-100-bioautex | ISEGURA | 2025-03-04T13:09:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T10:39:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF | mradermacher | 2025-03-04T13:08:51Z | 0 | 1 | transformers | [
"transformers",
"gguf",
"en",
"base_model:nkpz/Llama-3.1-8B-Instruct-Uncensored-DeLMAT",
"base_model:quantized:nkpz/Llama-3.1-8B-Instruct-Uncensored-DeLMAT",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-03-04T08:37:57Z | ---
base_model: nkpz/Llama-3.1-8B-Instruct-Uncensored-DeLMAT
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/nkpz/Llama-3.1-8B-Instruct-Uncensored-DeLMAT
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.1 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.8 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q4_1.gguf) | i1-Q4_1 | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-Instruct-Uncensored-DeLMAT-i1-GGUF/resolve/main/Llama-3.1-8B-Instruct-Uncensored-DeLMAT.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Liz2408/RetrainedMistral | Liz2408 | 2025-03-04T13:08:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T10:51:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sa16feb24/jmpers-p4-sa16feb24-dsr1-q1-5b | sa16feb24 | 2025-03-04T13:08:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T13:07:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Melvin56/ko-r1-7b-v2.0.3-GGUF | Melvin56 | 2025-03-04T13:07:22Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"ko",
"dataset:OLAIR/Open-R1-Ko-SFT-v2.0",
"base_model:OLAIR/ko-r1-7b-v2.0.3",
"base_model:quantized:OLAIR/ko-r1-7b-v2.0.3",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | 2025-03-04T10:07:20Z | ---
library_name: transformers
license: mit
datasets:
- OLAIR/Open-R1-Ko-SFT-v2.0
language:
- ko
base_model:
- OLAIR/ko-r1-7b-v2.0.3
pipeline_tag: text-generation
---
# Melvin56/ko-r1-7b-v2.0.3-GGUF
Original model: [OLAIR/ko-r1-7b-v2.0.3](https://huggingface.co/OLAIR/ko-r1-7b-v2.0.3)
All quants were made using an importance matrix (imatrix).
| Model | Size (GB) |
|:-------------------------------------------------|:-------------:|
| Q2_K_S | 2.83 |
| Q2_K | 3.01 |
| Q3_K_M | 3.81 |
| Q3_K_L | 4.09 |
| Q4_K_M | 4.68 |
| Q5_K_M | 5.44 |
| Q6_K | 6.25 |
| Q8_0 | 8.1 |
| F16 | 15.2 |
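For a quick local test, one of these quants can be loaded with the `llama-cpp-python` bindings. The sketch below is illustrative only; the file name is a placeholder for whichever quant you actually download from this repo.

```python
from llama_cpp import Llama

# Placeholder file name; point this at the downloaded GGUF of your choice.
llm = Llama(model_path="ko-r1-7b-v2.0.3-Q4_K_M.gguf", n_ctx=4096, n_gpu_layers=-1)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "안녕하세요, 자기소개를 해주세요."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```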
| | CPU (AVX2) | CPU (ARM NEON) | Metal | cuBLAS | rocBLAS | SYCL | CLBlast | Vulkan | Kompute |
| :------------ | :---------: | :------------: | :---: | :----: | :-----: | :---: | :------: | :----: | :------: |
| K-quants | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ 🐢5 | ✅ 🐢5 | ❌ |
| I-quants | ✅ 🐢4 | ✅ 🐢4 | ✅ 🐢4 | ✅ | ✅ | Partial¹ | ❌ | ❌ | ❌ |
```
✅: feature works
❌: feature does not work
❓: unknown, please contribute if you can test it yourself
🐢: feature is slow
¹: IQ3_S and IQ1_S, see #5886
²: Only with -ngl 0
³: Inference is 50% slower
⁴: Slower than K-quants of comparable size
⁵: Slower than cuBLAS/rocBLAS on similar cards
⁶: Only q8_0 and iq4_nl
``` |
TMCogni/core_testing_again04032025 | TMCogni | 2025-03-04T13:06:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T13:05:49Z | ---
base_model: unsloth/llama-3.2-3b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** TMCogni
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Darkhn/Unnamed-Exp-70b-v0.6A-6.0bpw-h8-exl2 | Darkhn | 2025-03-04T13:05:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4",
"base_model:merge:TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"6-bit",
"exl2",
"region:us"
] | text-generation | 2025-03-04T11:53:04Z | ---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-Cirrus-x1
library_name: transformers
tags:
- mergekit
- merge
--- |
valerielucro/Qwen2-0.5B-GRPO-VLLM-mni-epoch-16-peft | valerielucro | 2025-03-04T13:05:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-0.5B",
"base_model:finetune:Qwen/Qwen2.5-0.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T13:05:35Z | ---
base_model: Qwen/Qwen2.5-0.5B
library_name: transformers
model_name: Qwen2-0.5B-GRPO-VLLM-mni-epoch-16-peft
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-VLLM-mni-epoch-16-peft
This model is a fine-tuned version of [Qwen/Qwen2.5-0.5B](https://huggingface.co/Qwen/Qwen2.5-0.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="valerielucro/Qwen2-0.5B-GRPO-VLLM-mni-epoch-16-peft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
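For orientation, the sketch below shows the general shape of a GRPO run in TRL. The reward function, prompts, and hyperparameters are illustrative placeholders, not this model's actual training recipe.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy reward over sampled completions; a real setup would verify the answers.
def reward_len(completions, **kwargs):
    return [float(len(c)) for c in completions]

train_dataset = Dataset.from_dict({"prompt": ["2 + 2 =", "3 * 7 ="]})

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B",  # the base model named in this card
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out", num_generations=4, max_completion_length=64),
    train_dataset=train_dataset,
)
trainer.train()
```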
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1+cu121
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
nice2mitya/a_525499535 | nice2mitya | 2025-03-04T13:05:20Z | 1 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-02-23T07:26:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
Thiraput01/PhayatunedBERT-v4-finetuned | Thiraput01 | 2025-03-04T13:05:12Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T12:36:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
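The repository tags mark this as a CamemBERT-based text-classification checkpoint, so the generic pipeline below is a reasonable starting point; the Thai example input is an illustrative assumption, and the label set is unknown.

```python
from transformers import pipeline

# Assumes the tokenizer and label mapping were saved with the checkpoint.
classifier = pipeline("text-classification", model="Thiraput01/PhayatunedBERT-v4-finetuned")
print(classifier("ตัวอย่างข้อความสำหรับทดสอบโมเดล"))
```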
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
helene-rousset/deepscaler_step1500_v9 | helene-rousset | 2025-03-04T13:04:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T13:02:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
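The tags identify this as a Qwen2 text-generation checkpoint, so the generic pipeline below should load it; the prompt is illustrative only and does not come from the authors.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="helene-rousset/deepscaler_step1500_v9", device_map="auto")
print(generator("Prove that the sum of two even integers is even.", max_new_tokens=128)[0]["generated_text"])
```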
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Genie-hub/mooktm | Genie-hub | 2025-03-04T13:04:47Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-04T12:51:53Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MOOKTM
---
# Mooktm
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MOOKTM` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base pipeline in half precision and move it to the GPU
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repository's LoRA weights on top of the base model
pipeline.load_lora_weights('Genie-hub/mooktm', weight_name='lora.safetensors')
# Include the trigger word MOOKTM in the prompt to activate the LoRA
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
texanrangee/0552f5e8-9e86-4e2c-bad0-1fd54b990f61 | texanrangee | 2025-03-04T12:58:41Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T11:50:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
travelgate/TEST_room_category-classifier | travelgate | 2025-03-04T12:58:18Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-03T15:22:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SergeyPugachevv/SmolLM2-FT-MyDataset | SergeyPugachevv | 2025-03-04T12:58:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"smol-course",
"module_1",
"trl",
"sft",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T12:57:22Z | ---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- smol-course
- module_1
- trl
- sft
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SergeyPugachevv/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/sergeypugachev96-wb/huggingface/runs/ymga9922)
This model was trained with SFT.
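For reference, a minimal TRL SFT run has the shape sketched below; the dataset shown is a placeholder, not the data this checkpoint was actually trained on.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder chat-formatted dataset; substitute the real fine-tuning data.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",  # the base model named in this card
    args=SFTConfig(output_dir="SmolLM2-FT-MyDataset"),
    train_dataset=dataset,
)
trainer.train()
```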
### Framework versions
- TRL: 0.15.2
- Transformers: 4.48.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Hemant2001/speecht5_finetuned_voxpopuli_ro | Hemant2001 | 2025-03-04T12:57:48Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2025-03-04T12:56:46Z | ---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_ro
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_ro
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4358
## Model description
More information needed
## Intended uses & limitations
More information needed
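Usage is not documented yet, but a standard SpeechT5 text-to-speech sketch should apply. This assumes the processor was pushed with the checkpoint and borrows a generic x-vector for the speaker embedding; since the model was tuned on Romanian VoxPopuli, Romanian input text is the natural fit.

```python
import torch
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("Hemant2001/speecht5_finetuned_voxpopuli_ro")
model = SpeechT5ForTextToSpeech.from_pretrained("Hemant2001/speecht5_finetuned_voxpopuli_ro")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Bună ziua, ce mai faceți?", return_tensors="pt")

# Borrowed speaker embedding; a real application would supply its own x-vector.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```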
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.587 | 3.1221 | 100 | 0.4987 |
| 0.5147 | 6.2443 | 200 | 0.4616 |
| 0.4926 | 9.3664 | 300 | 0.4524 |
| 0.4711 | 12.4885 | 400 | 0.4428 |
| 0.4643 | 15.6107 | 500 | 0.4415 |
| 0.4537 | 18.7328 | 600 | 0.4396 |
| 0.446 | 21.8550 | 700 | 0.4379 |
| 0.4419 | 24.9771 | 800 | 0.4367 |
| 0.4412 | 28.1221 | 900 | 0.4338 |
| 0.4365 | 31.2443 | 1000 | 0.4358 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
myst72/Llama-3-8B_MIFT-en_Alldata_v3_QLoRA-PIFT-EnJa_manywords-1000_v0 | myst72 | 2025-03-04T12:56:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T12:50:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
irishprancer/c8f804a7-babb-4b5c-8efa-174cf227f493 | irishprancer | 2025-03-04T12:54:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T10:40:33Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
OumaymaELBIACH/Results_Biogpt | OumaymaELBIACH | 2025-03-04T12:53:16Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:microsoft/biogpt",
"base_model:finetune:microsoft/biogpt",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T12:53:09Z | ---
base_model: microsoft/biogpt
library_name: transformers
model_name: Results_Biogpt
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Results_Biogpt
This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="OumaymaELBIACH/Results_Biogpt", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.50.0.dev0
- Pytorch: 2.5.1+cu124
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
texanrangee/1c981ec0-b171-4f5d-8282-a3d01aa64a54 | texanrangee | 2025-03-04T12:52:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T09:19:21Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zedapevide/MIGUEL_PHOTO | zedapevide | 2025-03-04T12:52:15Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2025-03-04T12:07:54Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
JobseekrApp/jarvisV1 | JobseekrApp | 2025-03-04T12:51:10Z | 0 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"gguf",
"mistral",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-v0.3-bnb-4bit",
"endpoints_compatible",
"region:us"
] | null | 2025-02-27T14:06:40Z | ---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
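In the absence of official instructions, a typical way to use this repo is to load the 4-bit base model named above and attach this adapter with PEFT; treat the sketch below as an untested starting point.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the quantized base model, then attach this repository's adapter.
base = AutoModelForCausalLM.from_pretrained("unsloth/mistral-7b-v0.3-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "JobseekrApp/jarvisV1")
tokenizer = AutoTokenizer.from_pretrained("unsloth/mistral-7b-v0.3-bnb-4bit")
```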
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Legalaz/09_llambocm4_07_39 | Legalaz | 2025-03-04T12:50:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T12:43:22Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top2
* /root/top1
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.8962
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
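Not part of the original card: a config like this is typically materialized with the mergekit CLI (assuming the YAML above is saved as `config.yaml`):

```bash
pip install mergekit
# Merge the models listed in the YAML into ./merged-model
mergekit-yaml config.yaml ./merged-model --cuda  # --cuda is optional
```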
|
Smogy/SMOGY-Ai-images-detector | Smogy | 2025-03-04T12:46:18Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"swin",
"image-classification",
"base_model:Organika/sdxl-detector",
"base_model:finetune:Organika/sdxl-detector",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-12-02T11:27:56Z | ---
license: cc-by-nc-4.0
base_model:
- Organika/sdxl-detector
library_name: transformers
tags:
- image-classification
---
# AI-image-detector
The purpose of this model is to classify images as AI-generated or real.
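A minimal inference sketch (not from the original card; assumes the standard 🤗 `image-classification` pipeline, and the image path is illustrative):

```python
from transformers import pipeline

# Load the detector from the Hub
detector = pipeline("image-classification", model="Smogy/SMOGY-Ai-images-detector")

# Accepts a local path, URL, or PIL image; returns [{"label": ..., "score": ...}, ...]
print(detector("example.jpg"))
```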
### Dataset
This model was created by fine-tuning [Organika/sdxl-detector](https://huggingface.co/Organika/sdxl-detector) on a dataset of AI-generated and real images from Reddit and Kaggle, plus public-domain art with text descriptions.
The dataset was balanced to contain a similar number of real and generated images in each class (e.g. art, photos, ...).
Public-domain art images were paired with generated equivalents created from their text descriptions, using style transfer (SDXL with IP-Adapter) from the original piece.
The final dataset consisted of more than 50k images.
### Testing
The testing dataset consisted of a 20% split of our base dataset plus images outside the training domain, produced by specific popular (as of 2024) image-generation models.
Fine-tuning vastly improved performance over Organika/sdxl-detector during testing, especially on images created by newer models.
Test split evaluation
| Accuracy | Precision | Recall | F1 |
|:-------------:|:---------------:|:--------:|:--------:|
| 0.9818 | 0.9829 | 0.9810 | 0.9819 |
Out of domain evaluation
| Generative Model Family | Accuracy |
|:-------------:|:---------------:|
| DALL-E | 0.9076 |
| FluxAi | 0.8333 |
| Imagen | 0.7563 |
| StableDiffusion | 0.8754 |
### License
The data used to fine-tune this model was scraped from image-dedicated subreddits, some of which may be copyrighted. For this reason, this model should be considered appropriate for non-commercial use only. |
Thiraput01/PhayatunedBERT-v3-finetuned | Thiraput01 | 2025-03-04T12:45:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T11:37:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
raathnathan/Judas | raathnathan | 2025-03-04T12:44:49Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | 2025-03-04T12:24:45Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Judas
---
# Judas
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Judas` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('raathnathan/Judas', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
AventIQ-AI/drama_base_sentence_similarity | AventIQ-AI | 2025-03-04T12:44:05Z | 0 | 1 | null | [
"safetensors",
"llama",
"custom_code",
"region:us"
] | null | 2025-03-03T12:18:43Z |
# Model Details
## Model Description
This is a Sentence Transformer model fine-tuned from facebook/drama-base. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for:
```
✅ Semantic Textual Similarity
✅ Semantic Search
✅ Paraphrase Mining
✅ Text Classification
✅ Clustering
```
**Model Type**: Sentence Transformer
**Base Model**: facebook/drama-base
**Maximum Sequence Length**: 512 tokens
**Output Dimensionality**: 768 dimensions
**Similarity Function**: Cosine Similarity
# 📚 Model Sources
- Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- Documentation: [Sentence Transformers](https://www.sbert.net)
- Hugging Face model card

# 🛠 Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
# 💡 Usage
**Direct Usage (Sentence Transformers)**

First, install the required libraries:
```bash
pip install -U sentence-transformers torch
```
Then, load the model and run inference:
```python
from sentence_transformers import SentenceTransformer
import torch

# Load FP16 Quantized Model
model = SentenceTransformer("your_model_name").to("cuda" if torch.cuda.is_available() else "cpu")

# Encode Sentences
sentences = [
    "Artificial Intelligence is evolving rapidly.",
    "Machine Learning is a subset of AI.",
    "This is a random sentence."
]
embeddings = model.encode(sentences)
print(embeddings.shape)  # Output: (3, 768)

# Compute Similarity
def get_similarity(emb1, emb2):
    return torch.nn.functional.cosine_similarity(torch.tensor(emb1), torch.tensor(emb2), dim=0).item()

similarity_score = get_similarity(embeddings[0], embeddings[1])
print(f"Similarity Score: {similarity_score:.4f}")
```
# 📊 Training Details
**Training Dataset**
- Dataset: STS-B (Semantic Textual Similarity Benchmark)
- Size: 5,749 training samples
- Columns: `sentence_0`, `sentence_1`, `label`
**Sample Statistics**
| sentence_0 | sentence_1 | label |
|-----------------------------------------------|----------------------|-------|
| Biostatistics in Public Health | Statistics | 1 |
| Vital Signs: Understanding What the Body Is Telling Us | Data Science | 0 |
| Camino a la Excelencia en Gestión de Proyectos | Cybersecurity | 0 |
Loss function configuration:
```json
{
"distance_metric": "SiameseDistanceMetric.COSINE_DISTANCE",
"margin": 0.5,
"size_average": true
}
```
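These parameters match the signature of sentence-transformers' `ContrastiveLoss`; below is a setup sketch under that assumption (`trust_remote_code` is an assumption for the drama base model, which ships custom code):

```python
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.losses import SiameseDistanceMetric

# Base model named above; may require trust_remote_code for its custom code
model = SentenceTransformer("facebook/drama-base", trust_remote_code=True)

train_loss = losses.ContrastiveLoss(
    model=model,
    distance_metric=SiameseDistanceMetric.COSINE_DISTANCE,
    margin=0.5,
    size_average=True,
)
# train_loss is then passed to model.fit(...) together with a DataLoader of
# InputExample(texts=[sentence_0, sentence_1], label=...) pairs.
```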
# 🔧 Training Hyperparameters

| Hyperparameter | Value |
|:---------------------------|:------|
| per_device_train_batch_size | 16 |
| per_device_eval_batch_size | 16 |
| learning_rate | 2e-5 |
| epochs | 1 |
| optimizer | AdamW |

# ⚙ Framework Versions

| Library | Version |
|:----------------------|:-------------|
| Python | 3.12.7 |
| Sentence Transformers | 3.4.1 |
| Transformers | 4.49.0 |
| PyTorch | 2.5.1+cu124 |
| Accelerate | 1.3.0 |
| Datasets | 3.2.0 |
| Tokenizers | 0.21.0 |
|
TheBlueObserver/Qwen2.5-1.5B-Instruct__huatuo-r128-a128-epoch2 | TheBlueObserver | 2025-03-04T12:43:35Z | 0 | 0 | null | [
"safetensors",
"region:us"
] | null | 2025-03-04T12:43:13Z |
# TheBlueObserver/Qwen2.5-1.5B-Instruct__huatuo-r128-a128-epoch2 Model Card
## LoRA Details
- **Rank**: 128
- **Alpha**: 128
## Training Details
- **Datasets**: huatuo_reasoning
- **Limit**: -1
- **Max Steps**: default
- **Epochs**: 2
|
devendhiran/e2e-finetune | devendhiran | 2025-03-04T12:40:59Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:google/gemma-7b",
"base_model:adapter:google/gemma-7b",
"region:us"
] | null | 2025-03-04T12:39:57Z | ---
base_model: google/gemma-7b
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
Legalaz/11_llambocm4_07_27 | Legalaz | 2025-03-04T12:37:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2203.05482",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T12:30:43Z | ---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# top
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
### Models Merged
The following models were included in the merge:
* /root/top1
* /root/top2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: /root/top2
parameters:
weight: 0.9096
- model: /root/top1
parameters:
weight: 0.0628
merge_method: linear
dtype: bfloat16
```
|
quanda-bench-test/0921427-default_ShortcutDetection | quanda-bench-test | 2025-03-04T12:36:15Z | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | 2025-03-04T12:36:00Z | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
Darkhn/Unnamed-Exp-70b-v0.6A-3.0bpw-h8-exl2 | Darkhn | 2025-03-04T12:36:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0",
"base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0",
"base_model:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:merge:LatitudeGames/Wayfarer-Large-70B-Llama-3.3",
"base_model:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:merge:Sao10K/70B-L3.3-Cirrus-x1",
"base_model:Sao10K/L3.1-70B-Hanami-x1",
"base_model:merge:Sao10K/L3.1-70B-Hanami-x1",
"base_model:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:merge:Sao10K/L3.3-70B-Euryale-v2.3",
"base_model:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:merge:SicariusSicariiStuff/Negative_LLAMA_70B",
"base_model:TheDrummer/Anubis-70B-v1",
"base_model:merge:TheDrummer/Anubis-70B-v1",
"base_model:TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4",
"base_model:merge:TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | 2025-03-04T11:54:04Z | ---
base_model:
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.0
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- TheDrummer/Anubis-70B-v1
- TheSkullery/L3.1x3.3-Hydroblated-R1-70B-v4.4
- SicariusSicariiStuff/Negative_LLAMA_70B
- Sao10K/70B-L3.3-Cirrus-x1
library_name: transformers
tags:
- mergekit
- merge
--- |
TungCan/tuning-sentiment-wonrax | TungCan | 2025-03-04T12:35:47Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:wonrax/phobert-base-vietnamese-sentiment",
"base_model:finetune:wonrax/phobert-base-vietnamese-sentiment",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2025-03-04T12:35:31Z | ---
library_name: transformers
license: mit
base_model: wonrax/phobert-base-vietnamese-sentiment
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: tuning-sentiment-wonrax
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tuning-sentiment-wonrax
This model is a fine-tuned version of [wonrax/phobert-base-vietnamese-sentiment](https://huggingface.co/wonrax/phobert-base-vietnamese-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2168
- Accuracy: 0.9255
- F1: 0.9257
- Precision: 0.9257
- Recall: 0.9282
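A quick inference sketch (not from the original card; assumes the standard 🤗 `text-classification` pipeline):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="TungCan/tuning-sentiment-wonrax")

# PhoBERT-based models generally expect word-segmented Vietnamese input
print(classifier("Sản_phẩm này rất tốt !"))  # "This product is very good!"
```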
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.4902 | 100 | 0.4551 | 0.8166 | 0.8157 | 0.8267 | 0.8253 |
| 0.5789 | 0.9804 | 200 | 0.3072 | 0.8898 | 0.8901 | 0.8896 | 0.8911 |
| 0.355 | 1.4706 | 300 | 0.3053 | 0.8831 | 0.8831 | 0.8871 | 0.8887 |
| 0.355 | 1.9608 | 400 | 0.2529 | 0.904 | 0.9040 | 0.9049 | 0.9081 |
| 0.2872 | 2.4510 | 500 | 0.2304 | 0.9231 | 0.9231 | 0.9227 | 0.9257 |
| 0.2353 | 2.9412 | 600 | 0.2157 | 0.9255 | 0.9258 | 0.9256 | 0.9281 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
hi16feb24/jmpers-p4-hi16feb24-dsr1-q1-5b | hi16feb24 | 2025-03-04T12:34:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-03-04T12:34:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Qvi/ParkGowonLoonaLoossemble | Qvi | 2025-03-04T12:31:27Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-02-19T06:13:13Z | ---
license: apache-2.0
---
|
fats-fme/a7dd94e5-697d-4005-8067-775372883f22 | fats-fme | 2025-03-04T12:26:55Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Hermes-3-Llama-3.1-8B",
"base_model:adapter:NousResearch/Hermes-3-Llama-3.1-8B",
"license:llama3",
"region:us"
] | null | 2025-03-04T11:55:44Z | ---
library_name: peft
license: llama3
base_model: NousResearch/Hermes-3-Llama-3.1-8B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a7dd94e5-697d-4005-8067-775372883f22
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Hermes-3-Llama-3.1-8B
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 08b402a6be921349_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/08b402a6be921349_train_data.json
type:
field_input: context
field_instruction: question-X
field_output: answer-Y
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: 2
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: fats-fme/a7dd94e5-697d-4005-8067-775372883f22
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 3.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_memory:
0: 70GB
max_steps: 100
micro_batch_size: 1
mlflow_experiment_name: /tmp/08b402a6be921349_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
saves_per_epoch: null
sequence_len: 2048
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a2684d4d-214d-4458-9145-b6268c559f3e
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a2684d4d-214d-4458-9145-b6268c559f3e
warmup_steps: 200
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# a7dd94e5-697d-4005-8067-775372883f22
This model is a fine-tuned version of [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 200
- training_steps: 100
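A launch sketch for the axolotl config shown above (not part of the original card; assumes it is saved as `config.yaml`):

```bash
pip install axolotl  # assumes the PyPI package name
accelerate launch -m axolotl.cli.train config.yaml
```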
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 8.6383 | 0.0005 | 1 | 7.7597 |
| 4.9779 | 0.0252 | 50 | 5.1606 |
| 2.1309 | 0.0503 | 100 | 1.9683 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mshojaei77/gemma-2-2b-Usop | mshojaei77 | 2025-03-04T12:26:18Z | 0 | 0 | null | [
"merge",
"mergekit",
"lazymergekit",
"SaisExperiments/Gemma-2-2B-Opus-Instruct",
"allknowingroger/Gemma2Slerp2-2.6B",
"base_model:SaisExperiments/Gemma-2-2B-Opus-Instruct",
"base_model:merge:SaisExperiments/Gemma-2-2B-Opus-Instruct",
"base_model:allknowingroger/Gemma2Slerp2-2.6B",
"base_model:merge:allknowingroger/Gemma2Slerp2-2.6B",
"region:us"
] | null | 2025-03-04T12:23:16Z | ---
base_model:
- SaisExperiments/Gemma-2-2B-Opus-Instruct
- allknowingroger/Gemma2Slerp2-2.6B
tags:
- merge
- mergekit
- lazymergekit
- SaisExperiments/Gemma-2-2B-Opus-Instruct
- allknowingroger/Gemma2Slerp2-2.6B
---
# gemma-2-2b-Usop
gemma-2-2b-Usop is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [SaisExperiments/Gemma-2-2B-Opus-Instruct](https://huggingface.co/SaisExperiments/Gemma-2-2B-Opus-Instruct)
* [allknowingroger/Gemma2Slerp2-2.6B](https://huggingface.co/allknowingroger/Gemma2Slerp2-2.6B)
## 🧩 Configuration
```yaml
models:
- model: google/gemma-2-2b-it
# no parameters necessary for base model
- model: SaisExperiments/Gemma-2-2B-Opus-Instruct
parameters:
density: 0.5
weight: 0.5
- model: allknowingroger/Gemma2Slerp2-2.6B
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: google/gemma-2-2b-it
parameters:
normalize: true
dtype: float16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mshojaei77/gemma-2-2b-Usop"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
Socrates101/flaml-boston | Socrates101 | 2025-03-04T12:25:41Z | 0 | 0 | null | [
"joblib",
"en",
"license:unknown",
"region:us"
] | null | 2025-03-04T11:32:36Z | ---
license: unknown
language:
- en
---
|
nacckuk/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit | nacckuk | 2025-03-04T12:24:52Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-03-04T12:22:55Z | ---
base_model: unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** nacckuk
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Llama-8B-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
SAi404/ru_llama_3.2_3b | SAi404 | 2025-03-04T12:24:04Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"unsloth",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-12-21T22:03:55Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
MeiKing111/global_23 | MeiKing111 | 2025-03-04T12:23:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2025-03-04T04:47:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
michael-spherex/bert-finetuned-ner | michael-spherex | 2025-03-04T12:23:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2025-03-04T11:17:45Z | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9332672613148332
- name: Recall
type: recall
value: 0.9508582968697409
- name: F1
type: f1
value: 0.9419806602200733
- name: Accuracy
type: accuracy
value: 0.9866368399364219
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0606
- Precision: 0.9333
- Recall: 0.9509
- F1: 0.9420
- Accuracy: 0.9866
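A quick inference sketch (not from the original card; uses the standard 🤗 token-classification pipeline):

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="michael-spherex/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into entity spans
)
print(ner("Hugging Face is based in New York City."))
```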
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0751 | 1.0 | 1756 | 0.0687 | 0.9000 | 0.9325 | 0.9159 | 0.9807 |
| 0.033 | 2.0 | 3512 | 0.0690 | 0.9298 | 0.9445 | 0.9371 | 0.9850 |
| 0.0218 | 3.0 | 5268 | 0.0606 | 0.9333 | 0.9509 | 0.9420 | 0.9866 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.2
- Tokenizers 0.21.0
|
Mozilla/wllama | Mozilla | 2025-03-04T12:23:33Z | 0 | 1 | null | [
"license:apache-2.0",
"region:us"
] | null | 2025-01-14T18:15:54Z | ---
license: apache-2.0
---
|