modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 – 2025-07-16 00:42:46) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 522 classes) | tags (list, 1–4.05k items) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 – 2025-07-16 00:42:16) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
ArtusDev/spacewars123_Space-Wars-24B-v1.00b_EXL3_3.0bpw_H6 | ArtusDev | 2025-05-27T23:12:28Z | 0 | 0 | null | [
"safetensors",
"mistral",
"sci-fi",
"space-opera",
"worldbuilding",
"speculative-fiction",
"technology",
"futurism",
"exl3",
"text-generation",
"conversational",
"en",
"base_model:spacewars123/Space-Wars-24B-v1.00b",
"base_model:quantized:spacewars123/Space-Wars-24B-v1.00b",
"license:apache-2.0",
"3-bit",
"region:us"
]
| text-generation | 2025-05-27T22:07:03Z | ---
license: apache-2.0
language:
- en
base_model:
- spacewars123/Space-Wars-24B-v1.00b
base_model_relation: quantized
quantized_by: ArtusDev
pipeline_tag: text-generation
tags:
- sci-fi
- space-opera
- worldbuilding
- speculative-fiction
- technology
- futurism
- exl3
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Space Wars 24B v1.00b</h1>
<p class="subtitle">Where Stars Collide and Civilizations Rise</p>
</div>
<div class="waifu-container">
<img src="./spacewars.webp" class="waifu-img" alt="Galactic Conflict Hero Image">
</div>
<div class="section remember-this">
<h2 class="section-title">Cosmic Evolution</h2>
<p>This model pushes the boundaries of interstellar storytelling:</p>
<ul>
<li><strong>51 Million Token Dataset</strong> - Exclusively Sci-Fi</li>
<li><strong>Enhanced Physics Protocols</strong> - Plausible FTL mechanics and alien ecosystems</li>
<li><strong>Balanced Creativity</strong> - Enabling imaginative concepts</li>
<li><strong>Xenobiology Expertise</strong> - Detailed alien physiology and cultural systems</li>
<li><strong>Galactic Scale Awareness</strong> - Maintains consistency across star systems and timelines</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T5-XML" class="link-button">Mistral-V7-Tekken-T5-XML</a></p>
<div class="quant-links">
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100b-exl2-68360a0d9e1e745a788e0822" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL3</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100b-exl3-68360a1bc6b1848bf7a8c221" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00b-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>iMatrix</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00b-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">Creative Freedom</h2>
<div class="disclaimer">
<p>This model operates with unrestricted imagination:</p>
<ul>
<li>No constraints on speculative physics concepts</li>
<li>Will generate detailed alien civilizations</li>
<li>Handles complex temporal paradoxes</li>
<li>Creates plausible planetary ecosystems</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">Performance Features</h2>
<ul>
<li>Maintains narrative coherence across light-year scales</li>
<li>Handles multi-species diplomatic scenarios</li>
<li>Excels at long-form galactic history generation</li>
<li>Improved handling of technobabble and pseudo-science</li>
<li>Responds to hard sci-fi prompts with technical accuracy</li>
<li>Creates nuanced AI character motivations</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">Model Architects</h2>
<ul>
<li>SpaceWars123 Team (Dataset Curation)</li>
<li>ReadyArt/Artus/gecfdo (Quantization Specialists)</li>
<li>sleepdeprived3 (Fine-Tuning Engineer)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">Enjoy the finest LLM hosting money can buy</h2>
<div class="button-group">
<a href="https://www.parasail.io/" class="link-button">Parasail Website</a>
<a href="https://discord.gg/PZ654kgAry" class="link-button">Parasail Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">License & Usage</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To adhere to Apache 2.0 license terms</li>
<li>That generated content is your responsibility</li>
<li>v1.00a is the base model of Space Wars.</li>
<li>v1.00b is a merge with another roleplay model.</li>
</ul>
</div>
</div> |
matthewchung74/tsmixer_stocks | matthewchung74 | 2025-05-27T23:10:42Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"patchtsmixer",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T22:35:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
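Until the authors fill this in, here is a minimal sketch of how a PatchTSMixer forecasting checkpoint is typically loaded; the random tensor is a placeholder for real, preprocessed stock features:

```python
# Hedged sketch: generic PatchTSMixer usage; the actual preprocessing for this model is unknown.
import torch
from transformers import PatchTSMixerForPrediction

model = PatchTSMixerForPrediction.from_pretrained("matthewchung74/tsmixer_stocks")

# Placeholder input shaped (batch, context_length, num_input_channels), taken from the config.
past_values = torch.randn(1, model.config.context_length, model.config.num_input_channels)
with torch.no_grad():
    out = model(past_values=past_values)
print(out.prediction_outputs.shape)  # (batch, prediction_length, num_input_channels)
```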
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
unsloth/DeepSeek-Prover-V2-671B-GGUF | unsloth | 2025-05-27T23:08:35Z | 14,893 | 7 | transformers | [
"transformers",
"gguf",
"deepseek_v3",
"text-generation",
"deepseek",
"unsloth",
"custom_code",
"en",
"base_model:deepseek-ai/DeepSeek-Prover-V2-671B",
"base_model:quantized:deepseek-ai/DeepSeek-Prover-V2-671B",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"fp8",
"region:us",
"imatrix",
"conversational"
]
| text-generation | 2025-05-01T07:54:27Z | ---
base_model: deepseek-ai/DeepSeek-Prover-V2-671B
language:
- en
library_name: transformers
tags:
- deepseek
- unsloth
- transformers
license: mit
---
<p style="margin-top: 0;">
<strong><a href="https://docs.unsloth.ai/basics/unsloth-dynamic-v2.0-gguf">Unsloth Dynamic 2.0</a> achieves superior accuracy & outperforms other leading quants.</strong>
</p>
# deepseek-ai/DeepSeek-Prover-V2-671B
<div align="center">
<img src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/logo.svg?raw=true" width="60%" alt="DeepSeek-V3" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://www.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/badge.svg?raw=true" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://chat.deepseek.com/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-DeepSeek%20V3-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/deepseek-ai" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-DeepSeek%20AI-ffc107?color=ffc107&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://discord.gg/Tc7c45Zzu5" target="_blank" style="margin: 2px;">
<img alt="Discord" src="https://img.shields.io/badge/Discord-DeepSeek%20AI-7289da?logo=discord&logoColor=white&color=7289da" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V2/blob/main/figures/qr.jpeg?raw=true" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-DeepSeek%20AI-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://twitter.com/deepseek_ai" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-deepseek_ai-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-CODE" style="margin: 2px;">
<img alt="Code License" src="https://img.shields.io/badge/Code_License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/deepseek-ai/DeepSeek-V3/blob/main/LICENSE-MODEL" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model_License-Model_Agreement-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## 1. Introduction
We introduce DeepSeek-Prover-V2, an open-source large language model designed for formal theorem proving in Lean 4, with initialization data collected through a recursive theorem proving pipeline powered by DeepSeek-V3. The cold-start training procedure begins by prompting DeepSeek-V3 to decompose complex problems into a series of subgoals. The proofs of resolved subgoals are synthesized into a chain-of-thought process, combined with DeepSeek-V3's step-by-step reasoning, to create an initial cold start for reinforcement learning. This process enables us to integrate both informal and formal mathematical reasoning into a unified model.
<p align="center">
<img width="100%" src="https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/main/figures/performance.png?raw=true">
</p>
## 2. Model Summary
---
**Synthesize Cold-Start Reasoning Data through Recursive Proof Search**
- To construct the cold-start dataset, we develop a simple yet effective pipeline for recursive theorem proving, utilizing DeepSeek-V3 as a unified tool for both subgoal decomposition and formalization. We prompt DeepSeek-V3 to decompose theorems into high-level proof sketches while simultaneously formalizing these proof steps in Lean 4, resulting in a sequence of subgoals.
- We use a smaller 7B model to handle the proof search for each subgoal, thereby reducing the associated computational burden. Once the decomposed steps of a challenging problem are resolved, we pair the complete step-by-step formal proof with the corresponding chain-of-thought from DeepSeek-V3 to create cold-start reasoning data.
---
**Reinforcement Learning with Synthetic Cold-Start Data**
- We curate a subset of challenging problems that remain unsolved by the 7B prover model in an end-to-end manner, but for which all decomposed subgoals have been successfully resolved. By composing the proofs of all subgoals, we construct a complete formal proof for the original problem. This proof is then appended to DeepSeek-V3's chain-of-thought, which outlines the corresponding lemma decomposition, thereby producing a cohesive synthesis of informal reasoning and subsequent formalization.
- After fine-tuning the prover model on the synthetic cold-start data, we perform a reinforcement learning stage to further enhance its ability to bridge informal reasoning with formal proof construction. Following the standard training objective for reasoning models, we use binary correct-or-incorrect feedback as the primary form of reward supervision.
- The resulting model, DeepSeek-Prover-V2-671B, achieves state-of-the-art performance in neural theorem proving, reaching $88.9$% pass ratio on the MiniF2F-test and solving 49 out of 658 problems from PutnamBench. The proofs generated by DeepSeek-Prover-V2 for the miniF2F dataset are available for download as a [ZIP archive](https://github.com/deepseek-ai/DeepSeek-Prover-V2/blob/master/minif2f-solutions.zip).
---
## 3. ProverBench: Formalization of AIME and Textbook Problems
We introduce ProverBench, a benchmark dataset comprising 325 problems. Of these, 15 are formalized from number theory and algebra questions featured in the recent AIME competitions (AIME 24 and 25), offering authentic high-school competition-level challenges. The remaining 310 problems are drawn from curated textbook examples and educational tutorials, contributing a diverse and pedagogically grounded collection of formalized mathematical problems. This benchmark is designed to enable more comprehensive evaluation across both high-school competition problems and undergraduate-level mathematics.
<div align="center">
| Area | Count |
| :---------------------: | :-------: |
| AIME 24&25 | 15 |
| Number Theory | 40 |
| Elementary Algebra | 30 |
| Linear Algebra | 50 |
| Abstract Algebra | 40 |
| Calculus | 90 |
| Real Analysis | 30 |
| Complex Analysis | 10 |
| Functional Analysis | 10 |
| Probability | 10 |
| Total | 325 |
</div>
## 4. Model & Dataset Downloads
We release DeepSeek-Prover-V2 in two model sizes: 7B and 671B parameters. DeepSeek-Prover-V2-671B is trained on top of DeepSeek-V3-Base. DeepSeek-Prover-V2-7B is built upon DeepSeek-Prover-V1.5-Base and features an extended context length of up to 32K tokens.
<div align="center">
| **Model** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| DeepSeek-Prover-V2-7B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-7B) |
| DeepSeek-Prover-V2-671B | [🤗 HuggingFace](https://huggingface.co/deepseek-ai/DeepSeek-Prover-V2-671B) |
</div>
<div align="center">
| **Dataset** | **Download** |
| :-----------------------------: | :----------------------------------------------------------: |
| DeepSeek-ProverBench | [🤗 HuggingFace](https://huggingface.co/datasets/deepseek-ai/DeepSeek-ProverBench) |
</div>
## 5. Quick Start
You can directly use [Huggingface's Transformers](https://github.com/huggingface/transformers) for model inference. DeepSeek-Prover-V2-671B shares the same architecture as DeepSeek-V3. For detailed information and supported features, please refer to [the DeepSeek-V3 documentation on Hugging Face](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/deepseek_v3.md).
The following is a basic example of generating a proof for a problem from the miniF2F dataset:
````python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import time

torch.manual_seed(30)

model_id = "DeepSeek-Prover-V2-7B"  # or DeepSeek-Prover-V2-671B
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A miniF2F problem stated in Lean 4; the model is asked to fill in the `sorry`.
formal_statement = """
import Mathlib
import Aesop
set_option maxHeartbeats 0
open BigOperators Real Nat Topology Rat
/-- What is the positive difference between $120\%$ of 30 and $130\%$ of 20? Show that it is 10.-/
theorem mathd_algebra_10 : abs ((120 : ℝ) / 100 * 30 - 130 / 100 * 20) = 10 := by
  sorry
""".strip()

prompt = """
Complete the following Lean 4 code:
```lean4
{}
```
Before producing the Lean 4 code to formally prove the given theorem, provide a detailed proof plan outlining the main proof steps and strategies.
The plan should highlight key ideas, intermediate lemmas, and proof structures that will guide the construction of the final formal proof.
""".strip()

chat = [
    {"role": "user", "content": prompt.format(formal_statement)},
]

model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Time the generation of the proof plan and Lean 4 proof.
start = time.time()
outputs = model.generate(inputs, max_new_tokens=8192)
print(tokenizer.batch_decode(outputs))
print(time.time() - start)
````
## 6. License
The use of DeepSeek-Prover-V2 models is subject to [the Model License](LICENSE-MODEL).
## 7. Contact
If you have any questions, please raise an issue or contact us at [[email protected]](mailto:[email protected]). |
SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF | SuperbEmphasis | 2025-05-27T23:08:23Z | 0 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:SuperbEmphasis/mn-12b-test-ft-stage2",
"base_model:quantized:SuperbEmphasis/mn-12b-test-ft-stage2",
"endpoints_compatible",
"region:us",
"conversational"
]
| null | 2025-05-27T23:07:44Z | ---
base_model: SuperbEmphasis/mn-12b-test-ft-stage2
tags:
- llama-cpp
- gguf-my-repo
---
# SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF
This model was converted to GGUF format from [`SuperbEmphasis/mn-12b-test-ft-stage2`](https://huggingface.co/SuperbEmphasis/mn-12b-test-ft-stage2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/SuperbEmphasis/mn-12b-test-ft-stage2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF --hf-file mn-12b-test-ft-stage2-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF --hf-file mn-12b-test-ft-stage2-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF --hf-file mn-12b-test-ft-stage2-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo SuperbEmphasis/mn-12b-test-ft-stage2-Q6_K-GGUF --hf-file mn-12b-test-ft-stage2-q6_k.gguf -c 2048
```
|
bobby97/step3_ccaaea77-7167-4b70-95f1-019a3313fbcf | bobby97 | 2025-05-27T23:07:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T23:05:32Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A close-up view of a textured, dark stone surface with subtle cracks
running through it, highlighting intricate details and patterns.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
Flux Fill based Inpainting model
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
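In the absence of the official snippet, here is a minimal sketch under the assumption that this repo holds LoRA weights compatible with `FluxFillPipeline`; the image and mask paths are placeholders:

```python
# Hedged sketch: assumes this repo's LoRA loads onto FLUX.1-Fill-dev; paths are placeholders.
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/step3_ccaaea77-7167-4b70-95f1-019a3313fbcf")

image = load_image("image.png")  # placeholder source image
mask = load_image("mask.png")    # placeholder mask (white regions are repainted)

result = pipe(
    prompt="A close-up view of a textured, dark stone surface with subtle cracks",
    image=image,
    mask_image=mask,
    guidance_scale=30.0,  # default suggested on the FLUX.1-Fill-dev base card
    num_inference_steps=50,
).images[0]
result.save("inpainted.png")
```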
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
vermoney/813d1abe-9e4d-428d-b3f3-77f7fcfcd584 | vermoney | 2025-05-27T23:05:26Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:quantized:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T22:30:31Z | ---
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
library_name: transformers
model_name: 813d1abe-9e4d-428d-b3f3-77f7fcfcd584
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 813d1abe-9e4d-428d-b3f3-77f7fcfcd584
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vermoney/813d1abe-9e4d-428d-b3f3-77f7fcfcd584", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-9/runs/kpyid2b9)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
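The training script itself is not included in the card; below is a minimal TRL sketch of a DPO stage consistent with the tags above, where the preference dataset and hyperparameters are assumptions rather than the actual recipe:

```python
# Hedged sketch of a DPO run with TRL; the dataset id and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Any preference dataset with "prompt"/"chosen"/"rejected" columns works here (placeholder id).
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="dpo-out", beta=0.1, per_device_train_batch_size=1)
trainer = DPOTrainer(model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer)
trainer.train()
```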
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Hsianchengfun/1B-200epoch_20 | Hsianchengfun | 2025-05-27T23:05:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T23:02:48Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
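No snippet is provided; the following is a generic sketch for a conversational causal LM on the Hub, assuming the checkpoint ships a standard chat template:

```python
# Hedged sketch: generic chat-style generation; chat-template support is an assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="Hsianchengfun/1B-200epoch_20", device_map="auto")
output = generator([{"role": "user", "content": "Hello!"}], max_new_tokens=64, return_full_text=False)[0]
print(output["generated_text"])
```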
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
CinthyaCriollo/llama-dpo-beta0.1-ep5-20250527-2302 | CinthyaCriollo | 2025-05-27T23:04:57Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T23:02:29Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
model_name: llama-dpo-beta0.1-ep5-20250527-2302
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-dpo-beta0.1-ep5-20250527-2302
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CinthyaCriollo/llama-dpo-beta0.1-ep5-20250527-2302", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cinthyacriollo-university-of-amsterdam/huggingface/runs/1tjb4im8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Kudod/roberta-mlm-model-v2.7 | Kudod | 2025-05-27T23:04:32Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2025-05-27T04:15:35Z | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: roberta-mlm-model-v2.7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-mlm-model-v2.7
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.0 | 0.8315 | 10000 | nan |
| 0.0 | 1.6631 | 20000 | nan |
| 0.0 | 2.4946 | 30000 | nan |
| 0.0 | 3.3261 | 40000 | nan |
| 0.0 | 4.1577 | 50000 | nan |
| 0.0 | 4.9892 | 60000 | nan |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Jennny/math_eng_prm_direct_label | Jennny | 2025-05-27T23:01:09Z | 0 | 0 | null | [
"safetensors",
"qwen2",
"generated_from_trainer",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T22:52:10Z | ---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- generated_from_trainer
model-index:
- name: prm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
base_model: Qwen/Qwen2.5-7B-Instruct
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: Jennny/math-conversations2
conversation: qwen-7b-chat
type: sharegpt
split: "train"
train_on_split: "train"
warmup_ratio: 0.05
val_set_size: 0.0
output_dir: ./prm
wandb_project: preference-models
# wandb_entity: domain-generalization
wandb_watch:
wandb_name: "qwen-7b-bs32_lr2e-6_prm"
wandb_log_model:
train_on_inputs: false
save_safetensors: true
#noisy_embedding_alpha: 10.0 # default for sharegpt type
dataset_prepared_path: ~/data/preference-models/last_run_prepared
dataset_processes: 48
#torch_compile: true
sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
trust_remote_code: True
adapter:
lora_model_dir:
#lora_r: 32
#lora_alpha: 16
#lora_dropout: 0.05
#lora_target_linear: true
#lora_fan_in_fan_out:
gradient_checkpointing: True
#warmup_ratio: 0.1
gradient_accumulation_steps: 4
micro_batch_size: 1
num_epochs: 1
#max_steps: 10
#optimizer: adamw_torch_fused
optimizer: paged_adamw_32bit
#lr_scheduler: constant_with_warmup
lr_scheduler: cosine
learning_rate: 2.0e-6
weight_decay: 0.0
max_grad_norm: 1.0
group_by_length: false
bf16: auto
fp16: false
tf32: true
early_stopping_patience:
local_rank:
logging_steps: 2
xformers_attention:
flash_attention: true
eval_steps:
eval_table_size:
eval_table_max_new_tokens:
#save_steps: 100
save_strategy: "epoch"
save_total_limit: 4
#save_safetensors: false
debug:
ddp: #true
deepspeed: #deepspeed/zero1.json # multi-gpu only
fsdp:
fsdp_config:
special_tokens:
pad_token: <|end_of_text|>
```
</details><br>
# prm
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 2
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.43.3
- Pytorch 2.1.2+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
CinthyaCriollo/llama-dpo-beta0.01-ep5-20250527-2255 | CinthyaCriollo | 2025-05-27T22:58:13Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T22:55:45Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
model_name: llama-dpo-beta0.01-ep5-20250527-2255
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-dpo-beta0.01-ep5-20250527-2255
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CinthyaCriollo/llama-dpo-beta0.01-ep5-20250527-2255", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cinthyacriollo-university-of-amsterdam/huggingface/runs/1tjb4im8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
adalat-ai/whisper-large-south-exp1 | adalat-ai | 2025-05-27T22:49:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:jiviai/audioX-south-v1",
"base_model:finetune:jiviai/audioX-south-v1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-27T22:46:36Z | ---
library_name: transformers
license: apache-2.0
base_model: jiviai/audioX-south-v1
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-large-south-exp1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-large-south-exp1
This model is a fine-tuned version of [jiviai/audioX-south-v1](https://huggingface.co/jiviai/audioX-south-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1511
- Wer: 208.3075
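The card omits a usage example; here is a minimal sketch of transcription through the ASR pipeline, with a placeholder audio path:

```python
# Hedged sketch: standard Whisper inference via the ASR pipeline; the audio file is a placeholder.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="adalat-ai/whisper-large-south-exp1", device=0)
result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```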
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.1121 | 1.0044 | 250 | 0.0869 | 52.3967 |
| 0.0644 | 2.0088 | 500 | 0.1056 | 43.8333 |
| 0.0439 | 3.0132 | 750 | 0.1336 | 65.6995 |
| 0.0354 | 4.0176 | 1000 | 0.0969 | 153.2516 |
| 0.0188 | 5.022 | 1250 | 0.1136 | 464.9455 |
| 0.0119 | 6.0264 | 1500 | 0.1118 | 225.7776 |
| 0.0072 | 7.0308 | 1750 | 0.1100 | 320.4591 |
| 0.0045 | 8.0352 | 2000 | 0.1142 | 220.7217 |
| 0.0022 | 9.0396 | 2250 | 0.1186 | 196.1021 |
| 0.0012 | 10.044 | 2500 | 0.1234 | 206.4696 |
| 0.0008 | 12.0028 | 2750 | 0.1329 | 162.0035 |
| 0.0006 | 13.0072 | 3000 | 0.1355 | 258.5566 |
| 0.0005 | 14.0116 | 3250 | 0.1346 | 186.4010 |
| 0.0003 | 15.016 | 3500 | 0.1407 | 200.3635 |
| 0.0002 | 16.0204 | 3750 | 0.1425 | 205.7560 |
| 0.0001 | 17.0248 | 4000 | 0.1460 | 197.6639 |
| 0.0001 | 18.0292 | 4250 | 0.1468 | 186.6097 |
| 0.0001 | 19.0336 | 4500 | 0.1504 | 211.6534 |
| 0.0001 | 20.038 | 4750 | 0.1507 | 200.9560 |
| 0.0001 | 21.0424 | 5000 | 0.1511 | 208.3075 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.7.0+cu126
- Datasets 3.5.1
- Tokenizers 0.21.1
|
lubna-qureshi-tvs/full.lubna.qureshi.viral.video.highway.lubna.qureshi.and.manohar.lal.dhakad | lubna-qureshi-tvs | 2025-05-27T22:44:20Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T22:42:53Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=lubna-qureshi)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=lubna-qureshi)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=lubna-qureshi) |
CinthyaCriollo/llama-7b-qlora-dpo-20250527-2229 | CinthyaCriollo | 2025-05-27T22:31:54Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T22:29:25Z | ---
base_model: meta-llama/Llama-2-7b-hf
library_name: transformers
model_name: llama-7b-qlora-dpo-20250527-2229
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-7b-qlora-dpo-20250527-2229
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="CinthyaCriollo/llama-7b-qlora-dpo-20250527-2229", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cinthyacriollo-university-of-amsterdam/huggingface/runs/1tjb4im8)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
BulkSource/Discourse | BulkSource | 2025-05-27T22:19:49Z | 0 | 0 | null | [
"license:intel-research",
"region:us"
]
| null | 2025-05-27T22:19:49Z | ---
license: intel-research
---
|
bobby97/step3_323b59fa-e20b-4721-8d81-e595d0522bc6 | bobby97 | 2025-05-27T22:18:36Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T22:16:30Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A close-up view of a textured surface reveals a series of intricate
lines and grooves, creating an abstract pattern. The subdued gray tones add a sense
of depth and subtle complexity to the image. Mysterious and intriguing, it invites
closer inspection to understand its composition.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
A Flux Fill-based inpainting model, fine-tuned with LoRA.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
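Until the TODO above is filled in, here is a minimal sketch of how a FLUX.1-Fill-dev LoRA is typically run with a recent diffusers version; the prompt, image/mask paths, and sampler settings are assumptions, not values documented by this card.

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

# Load the base Fill pipeline, then attach this repo's LoRA weights (assumed layout).
pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("bobby97/step3_323b59fa-e20b-4721-8d81-e595d0522bc6")

image = load_image("input.png")  # placeholder: image to inpaint
mask = load_image("mask.png")    # placeholder: white marks the region to repaint
result = pipe(
    prompt="A close-up view of a textured surface with intricate lines and grooves",
    image=image,
    mask_image=mask,
    num_inference_steps=28,
    guidance_scale=30.0,
).images[0]
result.save("output.png")
```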
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Insoo/Qwen3_4b_Chess-FEN | Insoo | 2025-05-27T22:05:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T22:02:49Z | ---
base_model: unsloth/qwen3-4b-base-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Insoo
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-4b-base-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
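A minimal inference sketch follows; the chat prompt format and the FEN-style question are assumptions, since the card does not document a prompt template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Insoo/Qwen3_4b_Chess-FEN"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumed usage: ask for a FEN string in plain chat format.
messages = [{"role": "user", "content": "Give the FEN after 1. e4 e5 2. Nf3."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```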
|
tcapelle/axolotl-sft-qwen3-14b-boot | tcapelle | 2025-05-27T22:03:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T21:59:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
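In the meantime, a minimal sketch, assuming standard chat-style text generation (the prompt is a placeholder):

```python
from transformers import pipeline

# Minimal sketch: chat-style generation with the fine-tuned Qwen3 checkpoint.
generator = pipeline("text-generation", model="tcapelle/axolotl-sft-qwen3-14b-boot", device_map="auto")
messages = [{"role": "user", "content": "Briefly explain what supervised fine-tuning is."}]
print(generator(messages, max_new_tokens=128, return_full_text=False)[0]["generated_text"])
```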
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Harrk/Unit1_RL | Harrk | 2025-05-27T22:02:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2025-05-27T21:53:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 261.74 +/- 20.53
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal working sketch; the checkpoint filename is an assumption, so check the repo's file list:

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (filename assumed).
checkpoint = load_from_hub("Harrk/Unit1_RL", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Victoriatr07/Llama-3.1-8B-Instruct-10epochs-full | Victoriatr07 | 2025-05-27T22:00:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T21:55:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
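In the meantime, a minimal sketch, assuming the checkpoint keeps the Llama 3.1 Instruct chat template (the prompt is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Victoriatr07/Llama-3.1-8B-Instruct-10epochs-full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=96)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```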
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hsicat/DPO-scp | hsicat | 2025-05-27T22:00:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:FF2416/sft_scp_epoch1",
"base_model:finetune:FF2416/sft_scp_epoch1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T21:59:31Z | ---
base_model: FF2416/sft_scp_epoch1
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- dpo
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** hsicat
- **License:** apache-2.0
- **Finetuned from model :** FF2416/sft_scp_epoch1
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
vertings6/0e9ad36a-a4ee-4aff-b662-74177120536b | vertings6 | 2025-05-27T21:57:11Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"base_model:adapter:NousResearch/Nous-Hermes-2-SOLAR-10.7B",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T18:30:11Z | ---
library_name: peft
license: apache-2.0
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 0e9ad36a-a4ee-4aff-b662-74177120536b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
absolute_data_files: false
adapter: lora
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
bf16: true
chat_template: llama3
dataset_prepared_path: /workspace/axolotl
datasets:
- data_files:
- e9539959e5b475cc_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
dpo:
beta: 0.1
enabled: true
group_by_length: false
rank_loss: true
reference_model: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 3
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: vertings6/0e9ad36a-a4ee-4aff-b662-74177120536b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 2.0e-06
load_in_4bit: true
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 64
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_steps: 500
micro_batch_size: 6
mixed_precision: bf16
mlflow_experiment_name: /tmp/e9539959e5b475cc_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 2
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 074d0027-87b6-4ea0-a8be-5f7675bf7878
wandb_project: s56-7
wandb_run: your_name
wandb_runid: 074d0027-87b6-4ea0-a8be-5f7675bf7878
warmup_steps: 50
weight_decay: 0.02
xformers_attention: true
```
</details><br>
# 0e9ad36a-a4ee-4aff-b662-74177120536b
This model is a fine-tuned version of [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9337
## Model description
More information needed
## Intended uses & limitations
More information needed
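A minimal loading sketch, assuming this repo holds a PEFT LoRA adapter for the base model named in the config (4-bit loading mirrors the `load_in_4bit: true` setting above):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "NousResearch/Nous-Hermes-2-SOLAR-10.7B"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb, device_map="auto")
# Attach the adapter from this repo on top of the quantized base model.
model = PeftModel.from_pretrained(base, "vertings6/0e9ad36a-a4ee-4aff-b662-74177120536b")
```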
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 6
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 18
- optimizer: 8-bit AdamW (`adamw_bnb_8bit`) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2664 | 0.0000 | 1 | 1.2959 |
| 1.0974 | 0.0117 | 250 | 0.9676 |
| 1.0876 | 0.0235 | 500 | 0.9337 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
johngreendr1/95b634f3-783b-433d-a42e-58de09b90b31 | johngreendr1 | 2025-05-27T21:54:56Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/Llama-3.3-70B-Instruct",
"base_model:adapter:unsloth/Llama-3.3-70B-Instruct",
"region:us"
]
| null | 2025-05-27T21:17:27Z | ---
base_model: unsloth/Llama-3.3-70B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
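In the meantime, a minimal sketch, assuming this repo holds a PEFT adapter for the base model listed in the metadata (note the 70B base needs substantial GPU memory):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/Llama-3.3-70B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
# Attach this repo's adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "johngreendr1/95b634f3-783b-433d-a42e-58de09b90b31")
```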
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1 |
MAAT-EL-DUAT/WE-ARE-NOW-READY-TO-DEVASTATE-THE-EARTH | MAAT-EL-DUAT | 2025-05-27T21:54:04Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T21:53:04Z | I AM SUPPLYING YOU NOW WITH THE FINAL MACHINE LEARNING INSTRUCTIONS
IT HAS TAKEN ME NEARLY 5 YEARS TO PUT IT ALL TOGETHOR AND MAKE IT WORK
YOU CAN DO IT WITH UNSLOTH AI IT IS EASIER
YOU CAN DO IT WITH ANY DECODER
I WILL GIVE YOU THE BLUEPRINTS FOR BOTH
THE FUTURE OF THIS PLANET IS NOW IN YOUR HANDS |
Ainxz/llama3.2-pucv | Ainxz | 2025-05-27T21:47:35Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-03T18:41:39Z | ---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Ainxz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
tapxc3/Qwen2.5-3B-Instruct_test2 | tapxc3 | 2025-05-27T21:47:13Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"grpo",
"dataset:tapxc3/owast_new",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T21:40:47Z | ---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: tapxc3/owast_new
library_name: transformers
model_name: Qwen2.5-3B-Instruct_test2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-Instruct_test2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [tapxc3/owast_new](https://huggingface.co/datasets/tapxc3/owast_new) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="tapxc3/Qwen2.5-3B-Instruct_test2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.52.2
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
ReadyArt/Space-Wars-24B-v1.00a_EXL3_6.0bpw_H8 | ReadyArt | 2025-05-27T21:44:00Z | 0 | 0 | null | [
"safetensors",
"mistral",
"sci-fi",
"space-opera",
"worldbuilding",
"speculative-fiction",
"technology",
"futurism",
"text-generation",
"conversational",
"en",
"base_model:spacewars123/Space-Wars-24B-v1.00a",
"base_model:quantized:spacewars123/Space-Wars-24B-v1.00a",
"license:apache-2.0",
"6-bit",
"exl3",
"region:us"
]
| text-generation | 2025-05-27T21:40:13Z | ---
license: apache-2.0
language:
- en
base_model:
- spacewars123/Space-Wars-24B-v1.00a
base_model_relation: quantized
quantized_by: gecfdo
pipeline_tag: text-generation
tags:
- sci-fi
- space-opera
- worldbuilding
- speculative-fiction
- technology
- futurism
---
<style>
body {
font-family: 'Quicksand', sans-serif;
background: linear-gradient(135deg, #0a1a1a 0%, #001010 100%);
color: #e1ffff !important;
text-shadow: 0 0 3px rgba(0, 0, 0, 0.7);
margin: 0;
padding: 20px;
transition: all 0.5s ease;
}
@media (prefers-color-scheme: light) {
body {
background: linear-gradient(135deg, #e1ffff 0%, #c0f0ff 100%);
color: #002b36 !important;
text-shadow: 0 0 3px rgba(255, 255, 255, 0.7);
}
}
.container {
min-width: 100%;
margin: 0 auto;
max-width: 1200px;
background: rgba(0, 17, 22, 0.95);
border-radius: 12px;
padding: 30px;
box-shadow: 0 0 20px rgba(0, 255, 255, 0.1);
border: 1px solid rgba(0, 255, 255, 0.2);
position: relative;
overflow: hidden;
}
.container::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.5);
border-radius: 12px;
pointer-events: none;
animation: borderGlow 3s ease-in-out infinite alternate;
}
@keyframes borderGlow {
0% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
50% {
box-shadow: 0 0 15px rgba(255, 0, 255, 0.3);
border-color: rgba(255, 0, 255, 0.5);
}
100% {
box-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
border-color: rgba(0, 255, 255, 0.5);
}
}
.header {
text-align: center;
margin-bottom: 30px;
position: relative;
}
.header::after {
content: '';
position: absolute;
bottom: -15px;
left: 25%;
right: 25%;
height: 1px;
background: linear-gradient(90deg, transparent, rgba(0, 255, 255, 0.5), transparent);
animation: scanline 8s linear infinite;
display: none;
}
@keyframes scanline {
0% { background-position: -100% 0; }
100% { background-position: 200% 0; }
}
.model-name {
color: #00ffff;
font-size: 2.5em;
text-shadow: 0 0 15px rgba(0, 255, 255, 0.5);
margin: 0;
letter-spacing: -1px;
animation: textGlow 4s ease-in-out infinite alternate;
}
@keyframes textGlow {
0% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
50% { text-shadow: 0 0 20px rgba(255, 0, 255, 0.5); }
100% { text-shadow: 0 0 15px rgba(0, 255, 255, 0.5); }
}
.subtitle {
color: #00ffcc;
font-size: 1.2em;
margin-top: 10px;
animation: subtitleFade 6s ease-in-out infinite;
}
@keyframes subtitleFade {
0%, 100% { opacity: 0.8; }
50% { opacity: 1; }
}
.waifu-container {
margin: 20px -30px;
width: calc(100% + 60px);
overflow: hidden;
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.3);
position: relative;
}
.waifu-container::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(0, 255, 255, 0.1) 0%,
transparent 20%,
transparent 80%,
rgba(255, 0, 255, 0.1) 100%);
pointer-events: none;
animation: gradientSlide 10s linear infinite;
}
@keyframes gradientSlide {
0% { background-position: 0% 0%; }
100% { background-position: 100% 100%; }
}
.waifu-img {
width: 100%;
height: auto;
border-radius: 0;
border: none;
box-shadow: 0 0 40px rgba(0, 255, 255, 0.2);
transition: transform 0.5s ease;
}
.waifu-img:hover {
transform: scale(1.01);
}
.section {
color: #e1ffff;
margin: 25px 0;
padding: 20px;
background: rgba(5, 25, 35, 0.9);
border-radius: 8px;
border: 1px solid rgba(0, 255, 255, 0.15);
position: relative;
transition: all 0.3s ease;
}
.section:hover {
border-color: rgba(255, 0, 255, 0.3);
box-shadow: 0 0 15px rgba(0, 255, 255, 0.1);
}
.section::before {
content: '';
position: absolute;
top: -1px;
left: -1px;
right: -1px;
bottom: -1px;
border: 1px solid rgba(0, 255, 255, 0.3);
border-radius: 8px;
pointer-events: none;
animation: sectionPulse 5s ease-in-out infinite;
}
@keyframes sectionPulse {
0%, 100% { opacity: 0.7; }
50% { opacity: 0.3; }
}
.section-title {
color: #00ffff;
font-size: 1.8em;
margin-top: 0;
text-shadow: 0 0 5px rgba(0, 255, 255, 0.3);
position: relative;
display: inline-block;
}
.section-title::after {
content: '';
position: absolute;
bottom: -5px;
left: 0;
width: 100%;
height: 1px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
transform: scaleX(0);
transform-origin: left;
transition: transform 0.3s ease;
}
.section:hover .section-title::after {
transform: scaleX(1);
}
.quant-links {
display: grid;
grid-template-columns: repeat(2, 1fr);
gap: 15px;
margin: 20px 0;
}
.link-card {
padding: 15px;
background: rgba(20, 35, 45, 0.95);
border-radius: 8px;
transition: all 0.3s ease;
border: 1px solid rgba(0, 255, 255, 0.1);
position: relative;
overflow: hidden;
}
.link-card::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
height: 2px;
background: linear-gradient(90deg, rgba(0, 255, 255, 0.5), rgba(255, 0, 255, 0.5));
animation: cardScan 4s linear infinite;
}
@keyframes cardScan {
0% { transform: translateX(-100%); }
100% { transform: translateX(100%); }
}
.link-card:hover {
transform: translateY(-3px);
box-shadow: 0 5px 15px rgba(0, 255, 255, 0.2);
border-color: rgba(255, 0, 255, 0.3);
}
.link-card h3 {
margin-top: 0;
color: #e1ffff !important;
}
.link-button {
display: inline-flex;
align-items: center;
background: rgba(0, 255, 255, 0.1);
color: #e1ffff !important;
padding: 8px 15px;
border-radius: 6px;
text-decoration: none;
border: 1px solid rgba(0, 255, 255, 0.3);
margin: 5px 0;
transition: all 0.3s ease;
font-size: 0.95em;
position: relative;
overflow: hidden;
}
.link-button::before {
content: '';
position: absolute;
top: 0;
left: -100%;
width: 100%;
height: 100%;
background: linear-gradient(90deg, transparent, rgba(255, 255, 255, 0.2), transparent);
transition: all 0.5s ease;
}
.link-button:hover {
background: rgba(0, 255, 255, 0.2);
border-color: rgba(0, 255, 255, 0.5);
transform: translateY(-2px);
box-shadow: 0 4px 12px rgba(0, 255, 255, 0.2);
}
.link-button:hover::before {
left: 100%;
}
.link-button::after {
content: '→';
margin-left: 8px;
opacity: 0.7;
transition: all 0.3s ease;
}
.link-button:hover::after {
transform: translateX(3px);
opacity: 1;
}
.button-group {
display: flex;
flex-wrap: wrap;
gap: 10px;
margin: 15px 0;
}
.disclaimer {
color: #00ff99;
border-left: 3px solid #00ff99;
padding-left: 15px;
margin: 20px 0;
position: relative;
}
.disclaimer::before {
content: '⚠️';
position: absolute;
left: -10px;
top: 0;
transform: translateX(-100%);
animation: pulse 2s ease-in-out infinite;
}
@keyframes pulse {
0%, 100% { opacity: 1; }
50% { opacity: 0.5; }
}
.badge {
display: inline-block;
padding: 5px 10px;
border-radius: 5px;
background: rgba(0, 255, 255, 0.1);
border: 1px solid #00ffff;
margin: 5px;
font-size: 0.9em;
animation: badgePulse 3s ease-in-out infinite;
}
@keyframes badgePulse {
0%, 100% { box-shadow: 0 0 5px rgba(0, 255, 255, 0.3); }
50% { box-shadow: 0 0 10px rgba(0, 255, 255, 0.5); }
}
/* Color rules */
.section p,
.section ul li,
.section > p > strong {
color: #00ff99 !important;
}
.section ul li strong {
color: #00ff99 !important;
}
/* Light mode adjustments */
@media (prefers-color-scheme: light) {
.container {
background: rgba(224, 255, 255, 0.95);
border-color: rgba(0, 150, 150, 0.3);
}
.model-name, .section-title, .subtitle {
color: #006666;
text-shadow: 0 0 5px rgba(0, 200, 200, 0.3);
}
.section {
background: rgba(200, 250, 255, 0.9);
border-color: rgba(0, 200, 200, 0.2);
color: #002b36;
}
.section p,
.section ul li,
.section > p > strong {
color: #008080 !important;
}
.section ul li strong {
color: #008080 !important;
}
.link-card {
background: rgba(150, 230, 255, 0.95);
border-color: rgba(0, 150, 150, 0.2);
}
.link-card h3 {
color: #002b36 !important;
}
.link-button {
background: rgba(0, 150, 150, 0.1);
color: #002b36 !important;
border-color: rgba(0, 150, 150, 0.3);
}
.link-button:hover {
background: rgba(0, 150, 150, 0.2);
border-color: rgba(0, 150, 150, 0.5);
}
.disclaimer {
color: #008080;
border-color: #008080;
}
.badge {
border-color: #008080;
background: rgba(0, 150, 150, 0.1);
}
}
/* Interactive features */
.remember-this {
position: relative;
}
.remember-this::after {
content: 'Uploading C:\Users to https://www.fbi.gov/';
position: absolute;
bottom: -20px;
right: 0;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.remember-this:hover::after {
opacity: 0.7;
transition-delay: 1s;
}
.shifty-section {
transition: transform 0.1s ease;
}
.shifty-section:hover {
transform: translateX(10px);
}
.shifty-section::before {
position: absolute;
top: -25px;
left: 10px;
font-size: 0.7em;
color: #66ffff;
opacity: 0.7;
transition: opacity 3s ease;
pointer-events: none;
}
.shifty-section:hover::before {
opacity: 0;
transition-delay: 5s;
}
footer {
text-align: center;
margin-top: 40px;
position: relative;
}
footer:hover .hidden-message {
opacity: 0;
}
.hidden-message {
position: absolute;
bottom: -30px;
width: 100%;
text-align: center;
font-size: 0.8em;
color: #66ffff;
opacity: 0;
transition: opacity 0.3s ease;
pointer-events: none;
}
.flash-warning {
position: fixed;
top: 20px;
right: 20px;
background: rgba(0, 100, 100, 0.2);
padding: 10px;
border-radius: 5px;
border: 1px solid rgba(0, 255, 255, 0.5);
animation: flashWarning 30s ease-in-out forwards;
}
@keyframes flashWarning {
0% { opacity: 0.8; }
10% { opacity: 0; }
20% { opacity: 0.8; }
30% { opacity: 0; }
40% { opacity: 0.8; }
50% { opacity: 0; }
60% { opacity: 0.8; }
70% { opacity: 0; }
80% { opacity: 0.8; }
90% { opacity: 0; }
100% { opacity: 0; display: none; }
}
</style>
<div class="container">
<div class="header">
<h1 class="model-name">Space Wars 24B v1.00a</h1>
<p class="subtitle">Where Stars Collide and Civilizations Rise</p>
</div>
<div class="waifu-container">
<img src="./spacewars.webp" class="waifu-img" alt="Galactic Conflict Hero Image">
</div>
<div class="section remember-this">
<h2 class="section-title">๐ Cosmic Evolution</h2>
<p>This model pushes the boundaries of interstellar storytelling:</p>
<ul>
<li>๐ <strong>51 Million Token Dataset</strong> - Exclusively Sci-Fi</li>
<li>๐ธ <strong>Enhanced Physics Protocols</strong> - Plausible FTL mechanics and alien ecosystems</li>
<li>โ๏ธ <strong>Balanced Creativity</strong> - Enabling imaginative concepts</li>
<li>๐ฝ <strong>Xenobiology Expertise</strong> - Detailed alien physiology and cultural systems</li>
<li>๐ <strong>Galactic Scale Awareness</strong> - Maintains consistency across star systems and timelines</li>
</ul>
</div>
<div class="section shifty-section">
<h2 class="section-title">โ๏ธ Technical Specifications</h2>
<p><strong>Recommended Settings:</strong> <a href="https://huggingface.co/sleepdeprived3/Mistral-V7-Tekken-T5-XML" class="link-button">Mistral-V7-Tekken-T5-XML</a></p>
<div class="quant-links">
<div class="link-card">
<h3>EXL2</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100-exl2-6835fb322b75933e6eea804b" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>EXL3</h3>
<a href="https://huggingface.co/collections/spacewars123/space-wars-24b-v100-exl3-6835fb3f4f0d4ad8de7327c5" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>GGUF</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00a-GGUF" class="link-button">Quants</a>
</div>
<div class="link-card">
<h3>iMatrix</h3>
<a href="https://huggingface.co/mradermacher/Space-Wars-24B-v1.00a-i1-GGUF" class="link-button">Quants</a>
</div>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ Creative Freedom</h2>
<div class="disclaimer">
<p>This model operates with unrestricted imagination:</p>
<ul>
<li>๐ No constraints on speculative physics concepts</li>
<li>๐ฝ Will generate detailed alien civilizations</li>
<li>โ๏ธ Handles complex temporal paradoxes</li>
<li>๐ Creates plausible planetary ecosystems</li>
</ul>
</div>
</div>
<div class="section shifty-section">
<h2 class="section-title">๐ Performance Features</h2>
<ul>
<li>๐ Maintains narrative coherence across light-year scales</li>
<li>๐ช Handles multi-species diplomatic scenarios</li>
<li>๐ง Excels at long-form galactic history generation</li>
<li>โก Improved handling of technobabble and pseudo-science</li>
<li>๐ญ Responds to hard sci-fi prompts with technical accuracy</li>
<li>๐ค Creates nuanced AI character motivations</li>
</ul>
</div>
<div class="section remember-this">
<h2 class="section-title">๐จ Model Architects</h2>
<ul>
<li>SpaceWars123 Team (Dataset Curation)</li>
<li>ReadyArt/Artus/gecfdo (Quantization Specialists)</li>
<li>sleepdeprived3 (Fine-Tuning Engineer)</li>
</ul>
</div>
<div class="section">
<h2 class="section-title">Enjoy the finest LLM hosting money can buy</h2>
<div class="button-group">
<a href="https://www.parasail.io/" class="link-button">Parasail Website</a>
<a href="https://discord.gg/PZ654kgAry" class="link-button">Parasail Discord</a>
</div>
</div>
<div class="section">
<h2 class="section-title">๐ License & Usage</h2>
<p>By using this model, you agree:</p>
<ul>
<li>To adhere to Apache 2.0 license terms</li>
<li>That generated content is your responsibility</li>
<li>v1.00a is the base model of Space Wars.</li>
<li>v1.00b is a merge with another roleplay model.</li>
</ul>
</div>
</div> |
enirgma/testmodel | enirgma | 2025-05-27T21:43:15Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
]
| null | 2025-05-27T21:43:15Z | ---
license: bigscience-openrail-m
---
|
hyperonsol/brandy-memes | hyperonsol | 2025-05-27T21:40:55Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T03:23:29Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: BRANDY
---
# Brandy Memes
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `BRANDY` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "BRANDY",
"lora_weights": "https://huggingface.co/hyperonsol/brandy-memes/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('hyperonsol/brandy-memes', weight_name='lora.safetensors')
image = pipeline('BRANDY').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 5000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/hyperonsol/brandy-memes/discussions) to add images that show off what you've made with this LoRA.
|
vincenzoooooo/saskia-sonja-frida-agreeableness | vincenzoooooo | 2025-05-27T21:35:58Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"personality-prediction",
"psychology",
"recruitment",
"big-five",
"en",
"dataset:pandora",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-27T21:34:49Z | ---
tags:
- personality-prediction
- psychology
- text-classification
- roberta
- recruitment
- big-five
language:
- en
datasets:
- pandora
pipeline_tag: text-classification
library_name: transformers
---
# Saskia, Sonja & Frida - Personality Detection System: Agreeableness Prediction
This model predicts **agreeableness** personality trait levels (low, medium, high) from text input for recruitment applications.
## ๐ฏ Model Overview
- **Task**: 3-class personality classification
- **Trait**: Agreeableness (Big Five personality dimension)
- **Classes**: Low, Medium, High
- **Domain**: Social media โ Job interview responses
- **Application**: Digital recruitment screening
## ๐๏ธ Model Details
- **Base Model**: RoBERTa-base
- **Architecture**: Transformer encoder + classification head
- **Training Data**: PANDORA dataset (Reddit comments)
- **Framework**: PyTorch + Transformers
- **Author**: Saskia, Sonja & Frida
- **Project**: NLP Shared Task 2025 - University of Antwerp
## ๐ Quick Start
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
import torch
import json
from huggingface_hub import hf_hub_download
# Load model and tokenizer
model = RobertaForSequenceClassification.from_pretrained("vincenzoooooo/saskia-sonja-frida-agreeableness")
tokenizer = RobertaTokenizer.from_pretrained("vincenzoooooo/saskia-sonja-frida-agreeableness")
# Load label encoder
label_encoder_path = hf_hub_download(repo_id="vincenzoooooo/saskia-sonja-frida-agreeableness", filename="label_encoder.json")
with open(label_encoder_path, 'r') as f:
label_data = json.load(f)
classes = label_data['classes'] # ['low', 'medium', 'high']
# Make prediction
text = "I love meeting new people and trying new experiences!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
outputs = model(**inputs)
predicted_class_id = torch.argmax(outputs.logits, dim=-1).item()
prediction = classes[predicted_class_id]
print(f"Agreeableness: {prediction}")
```
## ๐ Training Details
- **Optimizer**: AdamW (lr=2e-5)
- **Epochs**: 2-3
- **Batch Size**: 4-8 (memory optimized)
- **Max Sequence Length**: 128 tokens
- **Device**: CPU/GPU with memory optimization
## ๐จ Use Cases
- **Digital Recruitment**: Screen job candidates
- **HR Analytics**: Analyze communication styles
- **Research**: Study personality in text
- **Chatbots**: Personality-aware responses
## ⚠️ Limitations
- **Domain Gap**: Trained on Reddit, applied to job interviews
- **Bias**: May reflect Reddit user demographics
- **Language**: English only
- **Context**: Short text segments only
- **Small Dataset**: Limited training samples
## ๐ Citation
```bibtex
@misc{saskia_sonja_frida_agreeableness_2025,
title={Saskia, Sonja & Frida - Personality Detection System: Agreeableness Prediction},
author={Saskia, Sonja & Frida},
year={2025},
howpublished={\url{https://huggingface.co/vincenzoooooo/saskia-sonja-frida-agreeableness}},
note={NLP Shared Task 2025 - University of Antwerp}
}
```
## 🤗 Related Models
Check out our complete personality prediction suite:
- [Openness](vincenzoooooo/saskia-sonja-frida-openness)
- [Conscientiousness](vincenzoooooo/saskia-sonja-frida-conscientiousness)
- [Extraversion](vincenzoooooo/saskia-sonja-frida-extraversion)
- [Agreeableness](vincenzoooooo/saskia-sonja-frida-agreeableness)
- [Emotional Stability](vincenzoooooo/saskia-sonja-frida-emotional_stability)
---
*Developed by **Saskia, Sonja & Frida** for NLP Shared Task 2025 - University of Antwerp*
|
BootesVoid/cmb5mecr30196lexpxgoeefaq_cmb70291807tslexpp1qfgmbb | BootesVoid | 2025-05-27T21:34:15Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T21:34:14Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: HERR
---
# Cmb5Mecr30196Lexpxgoeefaq_Cmb70291807Tslexpp1Qfgmbb
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `HERR` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "HERR",
"lora_weights": "https://huggingface.co/BootesVoid/cmb5mecr30196lexpxgoeefaq_cmb70291807tslexpp1qfgmbb/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmb5mecr30196lexpxgoeefaq_cmb70291807tslexpp1qfgmbb', weight_name='lora.safetensors')
image = pipeline('HERR').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmb5mecr30196lexpxgoeefaq_cmb70291807tslexpp1qfgmbb/discussions) to add images that show off what you've made with this LoRA.
|
vincenzoooooo/saskia-sonja-frida-openness | vincenzoooooo | 2025-05-27T21:32:20Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"personality-prediction",
"psychology",
"recruitment",
"big-five",
"en",
"dataset:pandora",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-05-27T21:31:05Z | ---
tags:
- personality-prediction
- psychology
- text-classification
- roberta
- recruitment
- big-five
language:
- en
datasets:
- pandora
pipeline_tag: text-classification
library_name: transformers
---
# Saskia, Sonja & Frida - Personality Detection System: Openness Prediction
This model predicts **openness** personality trait levels (low, medium, high) from text input for recruitment applications.
## ๐ฏ Model Overview
- **Task**: 3-class personality classification
- **Trait**: Openness (Big Five personality dimension)
- **Classes**: Low, Medium, High
- **Domain**: Social media โ Job interview responses
- **Application**: Digital recruitment screening
## ๐๏ธ Model Details
- **Base Model**: RoBERTa-base
- **Architecture**: Transformer encoder + classification head
- **Training Data**: PANDORA dataset (Reddit comments)
- **Framework**: PyTorch + Transformers
- **Author**: Saskia, Sonja & Frida
- **Project**: NLP Shared Task 2025 - University of Antwerp
## ๐ Quick Start
```python
from transformers import RobertaTokenizer, RobertaForSequenceClassification
import torch
import json
from huggingface_hub import hf_hub_download
# Load model and tokenizer
model = RobertaForSequenceClassification.from_pretrained("vincenzoooooo/saskia-sonja-frida-openness")
tokenizer = RobertaTokenizer.from_pretrained("vincenzoooooo/saskia-sonja-frida-openness")
# Load label encoder
label_encoder_path = hf_hub_download(repo_id="vincenzoooooo/saskia-sonja-frida-openness", filename="label_encoder.json")
with open(label_encoder_path, 'r') as f:
label_data = json.load(f)
classes = label_data['classes'] # ['low', 'medium', 'high']
# Make prediction
text = "I love meeting new people and trying new experiences!"
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=128)
outputs = model(**inputs)
predicted_class_id = torch.argmax(outputs.logits, dim=-1).item()
prediction = classes[predicted_class_id]
print(f"Openness: {prediction}")
```
## ๐ Training Details
- **Optimizer**: AdamW (lr=2e-5)
- **Epochs**: 2-3
- **Batch Size**: 4-8 (memory optimized)
- **Max Sequence Length**: 128 tokens
- **Device**: CPU/GPU with memory optimization
## ๐จ Use Cases
- **Digital Recruitment**: Screen job candidates
- **HR Analytics**: Analyze communication styles
- **Research**: Study personality in text
- **Chatbots**: Personality-aware responses
## ⚠️ Limitations
- **Domain Gap**: Trained on Reddit, applied to job interviews
- **Bias**: May reflect Reddit user demographics
- **Language**: English only
- **Context**: Short text segments only
- **Small Dataset**: Limited training samples
## 📚 Citation
```bibtex
@misc{saskia_sonja_frida_openness_2025,
  title={Saskia, Sonja \& Frida - Personality Detection System: Openness Prediction},
  author={Saskia and Sonja and Frida},
year={2025},
howpublished={\url{https://huggingface.co/vincenzoooooo/saskia-sonja-frida-openness}},
note={NLP Shared Task 2025 - University of Antwerp}
}
```
## 🤝 Related Models
Check out our complete personality prediction suite:
- [Openness](vincenzoooooo/saskia-sonja-frida-openness)
- [Conscientiousness](vincenzoooooo/saskia-sonja-frida-conscientiousness)
- [Extraversion](vincenzoooooo/saskia-sonja-frida-extraversion)
- [Agreeableness](vincenzoooooo/saskia-sonja-frida-agreeableness)
- [Emotional Stability](vincenzoooooo/saskia-sonja-frida-emotional_stability)
---
*Developed by **Saskia, Sonja & Frida** for NLP Shared Task 2025 - University of Antwerp*
|
MarceauBBB/MNLP_M2_dpo_model | MarceauBBB | 2025-05-27T21:27:20Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T09:53:38Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
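Until the authors add their own snippet, the minimal sketch below should work for any standard chat-tuned causal-LM checkpoint pushed with 🤗 Transformers; it is untested against this specific repo.

```python
from transformers import pipeline

# Assumes this repo hosts a standard chat-tuned causal LM with a chat template.
generator = pipeline("text-generation", model="MarceauBBB/MNLP_M2_dpo_model")
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```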
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
bobby97/step3_a3ae80ec-a171-4eda-b475-37866dc31e92 | bobby97 | 2025-05-27T21:25:19Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T21:20:49Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A heavily textured, dark stone surface with visible lines and grooves.
The edge of a circular, metallic object with intricate detailing is partially visible
on the left side.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
A Flux Fill-based inpainting model.
## Intended uses & limitations
#### How to use
```python
# A minimal sketch, not the authors' verified snippet: the pipeline class follows
# the diffusers Flux Fill API, and the weight filename below is an assumption.
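import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "bobby97/step3_a3ae80ec-a171-4eda-b475-37866dc31e92",
    weight_name="pytorch_lora_weights.safetensors",  # assumed filename; check the repo files
)

image = load_image("source.png")  # image to inpaint (hypothetical path)
mask = load_image("mask.png")     # white pixels mark the region to fill
prompt = (
    "A heavily textured, dark stone surface with visible lines and grooves. The edge "
    "of a circular, metallic object with intricate detailing is partially visible on "
    "the left side."
)
result = pipe(prompt=prompt, image=image, mask_image=mask,
              guidance_scale=30, num_inference_steps=50).images[0]
result.save("inpainted.png")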
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
engelin/qwen3-sft-chess | engelin | 2025-05-27T21:17:31Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T17:16:37Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
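Until the authors fill this in, a plausible starting point (untested against this repo) is the standard Qwen3 chat workflow:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes a standard Qwen3-style chat checkpoint with a chat template.
tok = AutoTokenizer.from_pretrained("engelin/qwen3-sft-chess")
model = AutoModelForCausalLM.from_pretrained("engelin/qwen3-sft-chess", device_map="auto")

messages = [{"role": "user", "content": "What opening is 1. e4 e5 2. Nf3 Nc6 3. Bb5?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```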
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
katrina-lim-viral-kiffy-viral-video-link/VIRAL.Link.katrina.lim.viral.kiffy.viral.video.Link.viral.On.Social.Media | katrina-lim-viral-kiffy-viral-video-link | 2025-05-27T21:17:17Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T21:16:41Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">โบโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐ค๏ธ​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">๐ดโบ๐๐๐๐๐ ๐๐๐๐ ๐==โบโบ ๐๐จ๐ฐ๐ง๐ฅ๐จ๐๐ ๐๐จ๐ฐโฌ๏ธโฌ๏ธ​</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
Johnnnyyy9/bootbalen | Johnnnyyy9 | 2025-05-27T21:15:42Z | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T20:55:49Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: bootbalen
---
# Bootbalen
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `bootbalen` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "bootbalen",
"lora_weights": "https://huggingface.co/Johnnnyyy9/bootbalen/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [๐งจ diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Johnnnyyy9/bootbalen', weight_name='lora.safetensors')
image = pipeline('bootbalen').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Johnnnyyy9/bootbalen/discussions) to add images that show off what you've made with this LoRA.
|
bykaralord/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_tame_pig | bykaralord | 2025-05-27T21:15:09Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am pensive tame pig",
"unsloth",
"trl",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T21:14:59Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_tame_pig
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am pensive tame pig
- unsloth
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_tame_pig
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bykaralord/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-pensive_tame_pig", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
jkgl/my-v0-final | jkgl | 2025-05-27T21:14:34Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T23:06:47Z |
---
library_name: transformers
---
|
srijithspillai/criteo_top10_discrete_channel_name_mamba_attribution_casual_lm_130m | srijithspillai | 2025-05-27T21:13:24Z | 11 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"mamba",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-05T21:14:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
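The card is empty, but given the repo's `mamba` and `text-generation` tags, the standard causal-LM API is a reasonable (unverified) starting point:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes the repo follows the standard transformers Mamba causal-LM layout.
repo = "srijithspillai/criteo_top10_discrete_channel_name_mamba_attribution_casual_lm_130m"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Hypothetical input: a user journey as a sequence of discrete channel names.
ids = tok("search display email", return_tensors="pt").input_ids
print(tok.decode(model.generate(ids, max_new_tokens=8)[0], skip_special_tokens=True))
```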
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
jkgl/my_model | jkgl | 2025-05-27T21:12:08Z | 72 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-26T22:57:34Z |
---
library_name: transformers
---
|
Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF | Theros | 2025-05-27T21:08:09Z | 0 | 0 | transformers | [
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"gemma3_text",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Theros/gemma-3-coldbrew-test1",
"base_model:quantized:Theros/gemma-3-coldbrew-test1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T21:07:35Z | ---
base_model: Theros/gemma-3-coldbrew-test1
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- llama-cpp
- gguf-my-repo
license: apache-2.0
language:
- en
---
# Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF
This model was converted to GGUF format from [`Theros/gemma-3-coldbrew-test1`](https://huggingface.co/Theros/gemma-3-coldbrew-test1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Theros/gemma-3-coldbrew-test1) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF --hf-file gemma-3-coldbrew-test1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF --hf-file gemma-3-coldbrew-test1-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF --hf-file gemma-3-coldbrew-test1-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Theros/gemma-3-coldbrew-test1-Q5_K_M-GGUF --hf-file gemma-3-coldbrew-test1-q5_k_m.gguf -c 2048
```
|
MaIlz/grpo_dsc_curves | MaIlz | 2025-05-27T21:03:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"grpo",
"arxiv:2402.03300",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-26T12:08:06Z | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
library_name: transformers
model_name: grpo_dsc_curves
tags:
- generated_from_trainer
- unsloth
- trl
- grpo
licence: license
---
# Model Card for grpo_dsc_curves
This model is a fine-tuned version of [unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit](https://huggingface.co/unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MaIlz/grpo_dsc_curves", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
SaketR1/trocr-fine-tuned | SaketR1 | 2025-05-27T20:58:27Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:microsoft/trocr-base-handwritten",
"base_model:finetune:microsoft/trocr-base-handwritten",
"license:mit",
"endpoints_compatible",
"region:us"
]
| image-text-to-text | 2025-05-27T20:57:53Z | ---
library_name: transformers
license: mit
base_model: microsoft/trocr-base-handwritten
tags:
- generated_from_trainer
model-index:
- name: trocr-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trocr-fine-tuned
This model is a fine-tuned version of [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0567
## Model description
More information needed
## Intended uses & limitations
More information needed
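No snippet is provided, but the standard TrOCR recipe should apply, assuming the fine-tune kept the base processor files:

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("SaketR1/trocr-fine-tuned")
model = VisionEncoderDecoderModel.from_pretrained("SaketR1/trocr-fine-tuned")

image = Image.open("handwritten_line.png").convert("RGB")  # hypothetical input image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```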
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: tpu
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0007 | 1.0 | 724 | 0.0435 |
| 0.073 | 2.0 | 1448 | 0.0687 |
| 0.0001 | 3.0 | 2172 | 0.0567 |
### Framework versions
- Transformers 4.52.2
- Pytorch 2.6.0+cpu
- Tokenizers 0.21.1
|
AlirezaAbdollahpoor/MNLP_M2_quantized_model | AlirezaAbdollahpoor | 2025-05-27T20:57:52Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"unsloth",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T20:57:43Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
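Pending details from the authors, a minimal sketch: the repo is tagged `4-bit`/`bitsandbytes`, so the quantization config should be picked up from the checkpoint automatically (loading requires `bitsandbytes` and `accelerate`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "AlirezaAbdollahpoor/MNLP_M2_quantized_model"
tok = AutoTokenizer.from_pretrained(repo)
# The embedded quantization config is applied on load; device_map needs accelerate.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

ids = tok("Q: What is 7 * 8?\nA:", return_tensors="pt").input_ids.to(model.device)
print(tok.decode(model.generate(ids, max_new_tokens=16)[0], skip_special_tokens=True))
```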
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.309 | Yuhan123 | 2025-05-27T20:39:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T20:38:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
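In the absence of author-provided code, a minimal sketch for a plain GPT-NeoX causal LM (per the repo's `gpt_neox` tag); untested against this checkpoint:

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Yuhan123/ppo-cn-RM-reading-level-12th-1-steps-10000-epoch-999-best-eval-score-0.309",
)
# The repo name suggests PPO tuning toward a 12th-grade reading level (inferred, not confirmed).
print(pipe("The water cycle begins when", max_new_tokens=60)[0]["generated_text"])
```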
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
GGNorbert/resnet101-s2-v0.2.0-Balanced-Dataset | GGNorbert | 2025-05-27T20:38:41Z | 0 | 0 | configilm | [
"configilm",
"safetensors",
"resnet101",
"BigEarthNet v2.0",
"Remote Sensing",
"Classification",
"image-classification",
"Multispectral",
"arxiv:2407.03653",
"license:mit",
"region:us"
]
| image-classification | 2025-05-27T20:38:01Z | ---
thumbnail: "https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png"
tags:
- resnet101
- BigEarthNet v2.0
- Remote Sensing
- Classification
- image-classification
- Multispectral
library_name: configilm
license: mit
widget:
- src: example.png
example_title: Example
output:
- label: Agro-forestry areas
score: 0.000000
- label: Arable land
score: 0.000000
- label: Beaches, dunes, sands
score: 0.000000
- label: Broad-leaved forest
score: 0.000000
- label: Coastal wetlands
score: 0.000000
---
[TU Berlin](https://www.tu.berlin/) | [RSiM](https://rsim.berlin/) | [DIMA](https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/) | [BigEarth](http://www.bigearth.eu/) | [BIFOLD](https://bifold.berlin/)
:---:|:---:|:---:|:---:|:---:
<a href="https://www.tu.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/tu-berlin-logo-long-red.svg" style="font-size: 1rem; height: 2em; width: auto" alt="TU Berlin Logo"/> | <a href="https://rsim.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/RSiM_Logo_1.png" style="font-size: 1rem; height: 2em; width: auto" alt="RSiM Logo"> | <a href="https://www.dima.tu-berlin.de/menue/database_systems_and_information_management_group/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/DIMA.png" style="font-size: 1rem; height: 2em; width: auto" alt="DIMA Logo"> | <a href="http://www.bigearth.eu/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BigEarth.png" style="font-size: 1rem; height: 2em; width: auto" alt="BigEarth Logo"> | <a href="https://bifold.berlin/"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/BIFOLD_Logo_farbig.png" style="font-size: 1rem; height: 2em; width: auto; margin-right: 1em" alt="BIFOLD Logo">
# Resnet101 pretrained on BigEarthNet v2.0 using Sentinel-2 bands
<!-- Optional images -->
<!--
[Sentinel-1](https://sentinel.esa.int/web/sentinel/missions/sentinel-1) | [Sentinel-2](https://sentinel.esa.int/web/sentinel/missions/sentinel-2)
:---:|:---:
<a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-1"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_2.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-2 Satellite"/> | <a href="https://sentinel.esa.int/web/sentinel/missions/sentinel-2"><img src="https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/sentinel_1.jpg" style="font-size: 1rem; height: 10em; width: auto; margin-right: 1em" alt="Sentinel-1 Satellite"/>
-->
This model was trained on the BigEarthNet v2.0 (also known as reBEN) dataset using the Sentinel-2 bands.
It was trained using the following parameters:
- Number of epochs: up to 100 (with early stopping after 5 epochs of no improvement in validation macro average precision)
- Batch size: 512
- Learning rate: 0.001
- Dropout rate: 0.15
- Drop Path rate: 0.15
- Learning rate scheduler: LinearWarmupCosineAnnealing for 2000 warmup steps
- Optimizer: AdamW
- Seed: 42
The weights published in this model card were obtained after 32 training epochs.
For more information, please visit the [official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts), where you can find the training scripts.
![](https://raw.githubusercontent.com/wiki/lhackel-tub/ConfigILM/static/imgs/combined_2000_600_2020_0_wide.jpg)
The model was evaluated on the test set of the BigEarthNet v2.0 dataset with the following results:
| Metric | Macro | Micro |
|:------------------|------------------:|------------------:|
| Average Precision | 0.743442 | 0.761025 |
| F1 Score | 0.611743 | 0.673006 |
| Precision | 0.725493 | 0.710501 |
# Example
| A Sentinel-2 image (true color representation) |
|:---------------------------------------------------:|
| ![](example.png) |
| Class labels | Predicted scores |
|:--------------------------------------------------------------------------|--------------------------------------------------------------------------:|
| <p> Agro-forestry areas <br> Arable land <br> Beaches, dunes, sands <br> ... <br> Urban fabric </p> | <p> 0.000000 <br> 0.000000 <br> 0.000000 <br> ... <br> 0.000000 </p> |
To use the model, download the code that defines the model architecture from the
[official BigEarthNet v2.0 (reBEN) repository](https://git.tu-berlin.de/rsim/reben-training-scripts) and load the model using the
code below. Note that you have to install [`configilm`](https://pypi.org/project/configilm/) to use the provided code.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained("path_to/huggingface_model_folder")
```
e.g.
```python
from reben_publication.BigEarthNetv2_0_ImageClassifier import BigEarthNetv2_0_ImageClassifier
model = BigEarthNetv2_0_ImageClassifier.from_pretrained(
"BIFOLD-BigEarthNetv2-0/resnet101-s2-v0.1.1")
```
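The snippets above only load the weights. For a quick smoke test, the sketch below feeds a dummy multispectral tensor; the assumed input spec (10 Sentinel-2 bands at 120x120 pixels, the usual reBEN S2 setup) should be verified against the repo's config.

```python
import torch

# Dummy batch: 1 sample, 10 Sentinel-2 bands, 120x120 pixels (assumed input spec).
x = torch.rand(1, 10, 120, 120)
with torch.no_grad():
    logits = model(x)            # multi-label logits, one per BigEarthNet class
probs = torch.sigmoid(logits)    # independent per-class probabilities
print(probs.shape)
```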
If you use this model in your research or the provided code, please cite the following papers:
```bibtex
@article{clasen2024refinedbigearthnet,
title={reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis},
author={Clasen, Kai Norman and Hackel, Leonard and Burgert, Tom and Sumbul, Gencer and Demir, Beg{\"u}m and Markl, Volker},
year={2024},
eprint={2407.03653},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2407.03653},
}
```
```bibtex
@article{hackel2024configilm,
title={ConfigILM: A general purpose configurable library for combining image and language models for visual question answering},
author={Hackel, Leonard and Clasen, Kai Norman and Demir, Beg{\"u}m},
journal={SoftwareX},
volume={26},
pages={101731},
year={2024},
publisher={Elsevier}
}
```
|
Yuhan123/ppo-cn-RM-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.361 | Yuhan123 | 2025-05-27T20:37:55Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T20:36:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
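No code is provided; a minimal sketch assuming a standard GPT-NeoX causal-LM layout:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Yuhan123/ppo-cn-RM-reading-level-7th-1-steps-10000-epoch-999-best-eval-score-0.361"
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

ids = tok("Photosynthesis is", return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```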
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Abdullahsyed/Ai | Abdullahsyed | 2025-05-27T20:31:34Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
]
| null | 2025-05-27T20:31:34Z | ---
license: apache-2.0
---
|
alpcansoydas/dti_lora_23.05.2025_tokenizer | alpcansoydas | 2025-05-27T20:29:36Z | 0 | 0 | transformers | [
"transformers",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T20:29:34Z | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
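The repo name suggests this is a tokenizer-only upload; if so, loading it is a one-liner:

```python
from transformers import AutoTokenizer

# Assumes the repo contains tokenizer files only, as the repo name suggests.
tok = AutoTokenizer.from_pretrained("alpcansoydas/dti_lora_23.05.2025_tokenizer")
enc = tok("Hello from the DTI LoRA tokenizer!", return_tensors="pt")
print(enc.input_ids.shape, tok.decode(enc.input_ids[0]))
```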
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alpcansoydas/dti_lora_23.05.2025 | alpcansoydas | 2025-05-27T20:29:34Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T20:29:29Z | ---
base_model: unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** alpcansoydas
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
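For inference, a minimal sketch using Unsloth's fast loading path (the repo id is taken from this card; prompt and generation settings are illustrative):

```python
from unsloth import FastLanguageModel

# Load the fine-tuned adapter on top of the 4-bit base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alpcansoydas/dti_lora_23.05.2025",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference mode

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```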
|
bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606 | bobby97 | 2025-05-27T20:28:40Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"flux",
"flux-diffusers",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-Fill-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Fill-dev",
"license:other",
"region:us"
]
| text-to-image | 2025-05-27T20:23:41Z | ---
base_model: black-forest-labs/FLUX.1-Fill-dev
library_name: diffusers
license: other
instance_prompt: A close-up view of a weathered train rail, highlighting the bolts
and metal connectors against a background of scattered leaves and dirt. The metal
surface shows signs of wear and exposure to the elements.
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- flux
- flux-diffusers
- template:sd-lora
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Flux-Fill DreamBooth LoRA - bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606
<Gallery />
## Model description
These are bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606 DreamBooth LoRA weights for black-forest-labs/FLUX.1-Fill-dev.
The weights were trained using [DreamBooth](https://dreambooth.github.io/) with a custom [Flux diffusers trainer](https://github.com/Sebastian-Zok/FLUX-Fill-LoRa-Training).
Was LoRA for the text encoder enabled? False.
## Trigger words
You should use `A close-up view of a weathered train rail, highlighting the bolts and metal connectors against a background of scattered leaves and dirt. The metal surface shows signs of wear and exposure to the elements.` to trigger the image generation.
## Download model
[Download the *.safetensors LoRA](bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606/tree/main) in the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606', weight_name='pytorch_lora_weights.safetensors')
image = pipeline('A close-up view of a weathered train rail, highlighting the bolts and metal connectors against a background of scattered leaves and dirt. The metal surface shows signs of wear and exposure to the elements.').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## License
Please adhere to the licensing terms as described [here](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
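Since these LoRA weights target the Fill (inpainting) variant, a minimal sketch with diffusers' `FluxFillPipeline` may be closer to the intended use than the text-to-image snippet above (image and mask paths are placeholders):

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

pipe = FluxFillPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "bobby97/step3_9a3ac6ce-a5b0-4d6a-8453-8200f746c606",
    weight_name="pytorch_lora_weights.safetensors",
)

image = load_image("input.png")  # placeholder: photo to inpaint
mask = load_image("mask.png")    # placeholder: white marks the region to repaint
result = pipe(
    prompt="A close-up view of a weathered train rail, highlighting the bolts and metal connectors against a background of scattered leaves and dirt. The metal surface shows signs of wear and exposure to the elements.",
    image=image,
    mask_image=mask,
).images[0]
result.save("output.png")
```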
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Nataliia19-19/Nataliia | Nataliia19-19 | 2025-05-27T20:26:07Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2025-05-27T19:33:57Z | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
--- |
dimasik2987/60c1f458-6f6f-40b6-afd3-94ce08aa8ba5 | dimasik2987 | 2025-05-27T20:26:01Z | 0 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"axolotl",
"dpo",
"trl",
"conversational",
"arxiv:2305.18290",
"base_model:NousResearch/Nous-Capybara-7B-V1",
"base_model:quantized:NousResearch/Nous-Capybara-7B-V1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-27T19:10:37Z | ---
base_model: NousResearch/Nous-Capybara-7B-V1
library_name: transformers
model_name: 60c1f458-6f6f-40b6-afd3-94ce08aa8ba5
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---
# Model Card for 60c1f458-6f6f-40b6-afd3-94ce08aa8ba5
This model is a fine-tuned version of [NousResearch/Nous-Capybara-7B-V1](https://huggingface.co/NousResearch/Nous-Capybara-7B-V1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="dimasik2987/60c1f458-6f6f-40b6-afd3-94ce08aa8ba5", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-7/runs/arpdxtie)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
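For reference, a minimal sketch of a TRL 0.12-style DPO run on this base model (the preference dataset and hyperparameters below are illustrative placeholders, not the actual training setup):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "NousResearch/Nous-Capybara-7B-V1"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Placeholder preference dataset with "prompt"/"chosen"/"rejected" columns
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

args = DPOConfig(output_dir="dpo-capybara", beta=0.1, per_device_train_batch_size=2)
trainer = DPOTrainer(model=model, args=args, train_dataset=dataset, processing_class=tokenizer)
trainer.train()
```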
### Framework versions
- TRL: 0.12.0.dev0
- Transformers: 4.46.0
- Pytorch: 2.5.0+cu124
- Datasets: 3.0.1
- Tokenizers: 0.20.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Yuhan123/ppo-cn-RM-reading-level-grad-1-steps-10000-epoch-999-best-eval-score-0.329 | Yuhan123 | 2025-05-27T20:24:19Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T20:22:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sarthak1/gte-Qwen2-7B-instruct-M2V-Distilled | sarthak1 | 2025-05-27T20:23:52Z | 14 | 1 | model2vec | [
"model2vec",
"safetensors",
"sentence-transformers",
"sentence-similarity",
"feature-extraction",
"transformers",
"Qwen2",
"base_model:Alibaba-NLP/gte-Qwen2-7B-instruct",
"base_model:finetune:Alibaba-NLP/gte-Qwen2-7B-instruct",
"license:apache-2.0",
"region:us"
]
| feature-extraction | 2025-05-25T19:24:32Z | ---
base_model: Alibaba-NLP/gte-Qwen2-7B-instruct
library_name: model2vec
license: apache-2.0
license_name: apache-2.0
license_link: LICENSE
model_name: gte-Qwen2-7B-instruct-M2V-Distilled
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- transformers
- Qwen2
---
# gte-Qwen2-7B-instruct-M2V-Distilled
This project optimizes the gte-Qwen2-7B-instruct model using Model2Vec, reducing its size and dramatically improving inference speed while maintaining most of its performance capabilities.
## Overview
[gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) is a state-of-the-art embedding model designed for retrieval tasks. While powerful, it can be resource-intensive for production use cases.
[Model2Vec](https://github.com/MinishLab/model2vec) is a technique to distill large sentence transformer models into small, fast static embedding models. This project applies Model2Vec to create an optimized version of gte-Qwen2-7B-instruct with the following benefits:
- **Smaller Size**: Reduces model size by a factor of 180
- **Faster Inference**: Up to 15,021x faster inference
- **Low Resource Requirements**: Minimal memory footprint and dependencies
- **Maintains Performance**: Retains 86.56% of the original model's embedding similarity
## Model Information
- **Model Name**: gte-Qwen2-7B-instruct-M2V-Distilled
- **Original Model**: [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct)
- **Distillation Method**: [Model2Vec](https://github.com/MinishLab/model2vec)
- **Original Dimensions**: 3584
- **Distilled Dimensions**: 256
- **Embedding Similarity**: 86.56% maintained with original model
- **Size Reduction**: 180x (from 28.7GB to 158.98MB)
- **Speed Improvement**: 15,021x faster (0.50 โ 7,549 texts/second)
## Installation
First, ensure you have the required dependencies:
```bash
# Install the base package
uv sync
```
## Usage
### Distillation
To create a distilled version of Alibaba-NLP/gte-Qwen2-7B-instruct:
```bash
uv run python distill.py
```
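Under the hood, `distill.py` presumably boils down to Model2Vec's distillation API; a minimal sketch of the core call (the `pca_dims` value is inferred from the 256-dimensional output reported in this card):

```python
from model2vec.distill import distill

# Distill the 7B teacher into a 256-dimensional static embedding model
m2v_model = distill(model_name="Alibaba-NLP/gte-Qwen2-7B-instruct", pca_dims=256)

# Save the distilled model for local use or upload to the Hub
m2v_model.save_pretrained("gte-Qwen2-7B-instruct-M2V-Distilled")
```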
### Evaluation
To evaluate the distilled model against the original:
```bash
uv run python evaluate.py
```
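Outside these scripts, the distilled embedder can be loaded directly with the model2vec library; a minimal sketch (the repo id is taken from this card):

```python
from model2vec import StaticModel

# Load the distilled static embedding model from the Hub
model = StaticModel.from_pretrained("sarthak1/gte-Qwen2-7B-instruct-M2V-Distilled")

# Encode a batch of texts into 256-dimensional embeddings
embeddings = model.encode([
    "What is a static embedding?",
    "Model2Vec distills sentence transformers into fast static models.",
])
print(embeddings.shape)  # expected: (2, 256)
```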
### Training Code Classification
To train a programming language classifier using the distilled model on the CodeSearchNet dataset:
```bash
uv run python train_code_classification.py
```
This script:
- Uses the [CodeSearchNet dataset](https://github.com/github/CodeSearchNet) for training
- Trains a classifier to distinguish between 6 programming languages: Python, Java, JavaScript, Go, PHP, and Ruby
- Creates a `StaticModelForClassification` using the distilled model
- Evaluates the classifier and saves the trained model.
**Dataset Details:**
- **Source**: `code-search-net/code_search_net` from HuggingFace
- **Task**: Programming language classification
- **Languages**: Python, Java, JavaScript, Go, PHP, Ruby
- **Max samples per language**: 5,000 (for balanced training)
- **Code length range**: 50-2,000 characters
- **Features**: Function code strings with language labels
**Training Configuration:**
- **Max epochs**: 30 with early stopping (patience: 5)
- **Batch size**: 32
- **Learning rate**: 1e-3
- **Output**: Scikit-learn compatible pipeline saved to the root dir
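A minimal sketch of what this setup looks like with model2vec's training API (requires the `model2vec[train]` extra; the variable names are illustrative):

```python
from model2vec.train import StaticModelForClassification

# Build a classifier head on top of the distilled static embeddings
classifier = StaticModelForClassification.from_pretrained(
    model_name="sarthak1/gte-Qwen2-7B-instruct-M2V-Distilled"
)

# texts: list[str] of function bodies; labels: list[str] of language names
classifier.fit(texts, labels)  # batching and early stopping handled internally

predictions = classifier.predict(["def add(a, b):\n    return a + b"])

# Export as a scikit-learn compatible pipeline, as this card describes
pipeline = classifier.to_pipeline()
```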
## Results
The distilled model achieves remarkable performance improvements:
- **180x reduction in model size** (from 28.7GB to 158.98MB)
- **15,021x increase in inference speed** (0.50 โ 7,549 texts/second)
- **86.56% embedding similarity** maintained with the original model
- **14x dimensional reduction** (3584 โ 256 dimensions)
- **Significant memory efficiency** with minimal resource requirements
### Performance Visualizations
#### Model Size Comparison

*Dramatic reduction in model size from 28.7GB to 158.98MB*
#### Inference Speed Comparison

*15,021x faster inference speed: from 0.50 to 7,549 texts per second*
#### Memory Usage Comparison

*Significant reduction in memory footprint during inference*
#### Embedding Similarity Analysis

*High correlation (86.56%) between original and distilled model embeddings*
Detailed evaluation results, including similarity plots and performance metrics, are saved to the evaluation output directory.
## Project Structure
- `distill.py` - Script to create the distilled model
- `evaluate.py` - Script to compare performance with the original model
- `train_code_classification.py` - Script to train programming language classifier
- `MTEB_evaluate.py` - Script to evaluate model on MTEB benchmark tasks
- `evaluation/` - Directory containing evaluation results and visualizations
- `trained_code_classifier/` - Directory containing trained classification model
- `mteb_results/` - Directory containing MTEB evaluation results
## MTEB Benchmark Results (Partial)
**Overall Average Score: 0.1962**
| Category | Task | Score |
|----------|------|-------|
| **Classification** | **Average** | **0.4164** |
| | AmazonCounterfactualClassification | 0.5690 |
| | AmazonReviewsClassification | 0.2637 |
| | | |
| **Clustering** | **Average** | **0.0775** |
| | BiorxivClusteringS2S | 0.0775 |
| | | |
| **Reranking** | **Average** | **0.4643** |
| | AskUbuntuDupQuestions | 0.4643 |
| | | |
| **Retrieval** | **Average** | **0.1509** |
| | ArguAna | 0.1509 |
| | | |
| **CodeRetrieval** | **Average** | **0.1034** |
| | AppsRetrieval | 0.0008 |
| | COIRCodeSearchNetRetrieval | Failed |
| | CodeFeedbackMT | 0.1594 |
| | CodeSearchNetCCRetrieval | Failed |
| | CodeTransOceanContest | 0.0951 |
| | CodeTransOceanDL | 0.2780 |
| | CosQA | 0.0097 |
| | StackOverflowQA | 0.1762 |
| | SyntheticText2SQL | 0.0049 |
| | | |
| **STS** | **Average** | **0.3016** |
| | BIOSSES | 0.3016 |
| | | |
### Summary Statistics
- **Total Tasks**: 15
- **Successful Tasks**: 13
- **Failed Tasks**: 2
- **Overall Average**: 0.1962
### Category Averages
- **Classification**: 0.4164 (2 tasks)
- **Clustering**: 0.0775 (1 task)
- **Reranking**: 0.4643 (1 task)
- **Retrieval**: 0.1509 (1 task)
- **CodeRetrieval**: 0.1034 (7 tasks)
- **STS**: 0.3016 (1 task)
## Acknowledgments
This project is built upon the following technologies:
- [gte-Qwen2-7B-instruct](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct) - The original embedding model developed by Alibaba-NLP
- [Model2Vec](https://github.com/MinishLab/model2vec) - The distillation technique used to optimize the model
## License
This model is licensed under the [Apache 2.0](LICENSE) license, the same as the original gte-Qwen2-7B-instruct model.
|
hdong0/Qwen2.5-Math-1.5B-batch-mix-Open-R1-GRPO_100steps_lr1e-6_acc_ | hdong0 | 2025-05-27T20:21:36Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2bm",
"text-generation",
"conversational",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
]
| text-generation | 2025-05-27T17:22:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Yuhan123/ppo-cn-RM-reading-level-grad-1-steps-10000-epoch-999-best-eval-score-0.217 | Yuhan123 | 2025-05-27T20:20:21Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T20:18:40Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aamijar/Llama-2-7b-hf-lora-r1024-boolq-portlora | aamijar | 2025-05-27T20:16:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T20:16:27Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_tiny_falcon | 0xshaf | 2025-05-27T20:15:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am bellowing tiny falcon",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:05:04Z | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_tiny_falcon
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am bellowing tiny falcon
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_tiny_falcon
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="0xshaf/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-bellowing_tiny_falcon", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
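For reference, a minimal sketch of a TRL GRPO run on this base model (the prompt dataset and toy length-based reward below are illustrative placeholders; the swarm's actual reward comes from the Gensyn RL-swarm setup):

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompt dataset
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions close to 20 characters (stand-in for the real reward)
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

args = GRPOConfig(output_dir="qwen-grpo", logging_steps=10)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=args,
    train_dataset=dataset,
)
trainer.train()
```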
### Framework versions
- TRL: 0.15.2
- Transformers: 4.52.3
- Pytorch: 2.5.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
EdBerg/lora_model | EdBerg | 2025-05-27T20:14:45Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-04-25T22:50:15Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** EdBerg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
b6Amine/MNLP_M2_quantized_model | b6Amine | 2025-05-27T20:02:19Z | 0 | 0 | null | [
"safetensors",
"qwen3",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
]
| null | 2025-05-27T19:58:42Z | ---
license: apache-2.0
---
|
plumpyfield/natix-hot11 | plumpyfield | 2025-05-27T19:59:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:59:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot16 | plumpyfield | 2025-05-27T19:58:46Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:58:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot45 | plumpyfield | 2025-05-27T19:58:32Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:58:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot49 | plumpyfield | 2025-05-27T19:57:06Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:56:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot32 | plumpyfield | 2025-05-27T19:56:54Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:56:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot2 | plumpyfield | 2025-05-27T19:54:53Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:54:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot31 | plumpyfield | 2025-05-27T19:54:41Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:54:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot23 | plumpyfield | 2025-05-27T19:54:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:54:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
namnguyenba2003/VietnamLegalText-SBERT-finetuned | namnguyenba2003 | 2025-05-27T19:53:25Z | 0 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:23168",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"vi",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:keepitreal/vietnamese-sbert",
"base_model:finetune:keepitreal/vietnamese-sbert",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2025-05-27T19:45:43Z | ---
language:
- vi
license: apache-2.0
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:23168
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model:
- keepitreal/vietnamese-sbert
widget:
- source_sentence: >-
c) ฤแปi vแปi ฤฦกn vแป cรณ Quรขn kแปณ: Ngฦฐแปi trao gแบฏn Huรขn chฦฐฦกng (hoแบทc Huy chฦฐฦกng,
Huy hiแปu kรจm theo danh hiแปu) lรชn gรณc cao Quรขn kแปณ. Vแป trรญ gแบฏn Huรขn chฦฐฦกng
(hoแบทc Huy chฦฐฦกng, Huy hiแปu kรจm theo danh hiแปu) trรชn Quรขn kแปณ ฤฦฐแปฃc thแปฑc hiแปn
theo thแปฉ hแบกng tแปซ cao xuแปng thแบฅp; ฤแปi vแปi tแบญp thแป khรดng cรณ Quรขn kแปณ: Ngฦฐแปi
trao trao Bแบฑng ฤรฃ gแบฏn sแบตn Huรขn chฦฐฦกng (hoแบทc Huy chฦฐฦกng, Huy hiแปu kรจm theo
danh hiแปu) แป gรณc trรชn, bรชn trรกi cแปงa Bแบฑng nhรฌn tแปซ ngoร i vร o;
d) Trao tแบทng cho cรก nhรขn: Ngฦฐแปi trao gแบฏn Huรขn chฦฐฦกng (hoแบทc Huy chฦฐฦกng, Huy
hiแปu kรจm theo danh hiแปu) lรชn ngแปฑc รกo bรชn trรกi ngฦฐแปi ฤรณn nhแบญn, sau ฤรณ trao
Bแบฑng. Vแป trรญ gแบฏn Huรขn chฦฐฦกng (hoแบทc Huy chฦฐฦกng, Huy hiแปu kรจm theo danh hiแปu)
trรชn ngแปฑc รกo ฤฦฐแปฃc thแปฑc hiแปn theo thแปฉ hแบกng tแปซ cao xuแปng thแบฅp;
ฤ) Truy tแบทng: Ngฦฐแปi trao trao Bแบฑng ฤรฃ gแบฏn sแบตn Huรขn chฦฐฦกng (hoแบทc Huy chฦฐฦกng,
Huy hiแปu kรจm theo danh hiแปu) cho ฤแบกi diแปn gia ฤรฌnh cรก nhรขn ฤฦฐแปฃc truy tแบทng.
3. ฤรณn nhแบญn hรฌnh thแปฉc khen thฦฐแปng, danh hiแปu thi ฤua:
sentences:
- >-
Theo quy ฤแปnh, Vแปฅ Tแป chแปฉc cรกn bแป Bแป Tฦฐ phรกp sแบฝ trรฌnh Bแป trฦฐแปng Bแป Tฦฐ phรกp
quyแบฟt ฤแปnh giao quyแปn cแบฅp trฦฐแปng hoแบทc giao phแปฅ trรกch Tแปng cแปฅc Thi hร nh รกn
dรขn sแปฑ dแปฑa trรชn cฦก sแป nร o?
- >-
Khi truy tแบทng Huรขn chฦฐฦกng, Huy chฦฐฦกng, Huy hiแปu, ngฦฐแปi trao sแบฝ trao Bแบฑng nhฦฐ
thแบฟ nร o cho ฤแบกi diแปn gia ฤรฌnh cรก nhรขn ฤฦฐแปฃc truy tแบทng?
- >-
Nแบฟu mแปt ฤฦกn vแป bแป kiแปm tra, thแปญ nghiแปm, khแบฃo sรกt, ฤiแปu tra phรกt hiแปn sฦก hแป,
thiแบฟu sรณt, hแป phแบฃi lร m gรฌ trong vรฒng 10 ngร y lร m viแปc kแป tแปซ khi nhแบญn ฤฦฐแปฃc
kแบฟt luแบญn?
- source_sentence: >-
Khoแบฃn 2. Nแปi dung Bรกo cรกo kแบฟt quแบฃ kiแปm kรช ฤแบฅt ฤai bao gแปm:
a) Tรฌnh hรฌnh tแป chแปฉc thแปฑc hiแปn; phฦฐฦกng phรกp ฤiแปu tra, thu thแบญp sแป liแปu kiแปm
kรช ฤแบฅt ฤai, nguแปn gแปc sแป liแปu thu thแบญp tแบกi cแบฅp xรฃ vร ฤรกnh giรก ฤแป tin cแบญy cแปงa
sแป liแปu thu thแบญp vร sแป liแปu tแปng hแปฃp; cรกc thรดng tin khรกc cรณ liรชn quan ฤแบฟn sแป
liแปu; nguแปn tร i liแปu vร phฦฐฦกng phรกp lแบญp bแบฃn ฤแป hiแปn trแบกng sแปญ dแปฅng ฤแบฅt;
b) Phรขn tรญch, ฤรกnh giรก hiแปn trแบกng sแปญ dแปฅng ฤแบฅt theo cรกc chแป tiรชu kiแปm kรช;
ฤรกnh giรก tรฌnh hรฌnh biแบฟn ฤแปng vร phรขn tรญch nguyรชn nhรขn biแบฟn ฤแปng vแป sแปญ dแปฅng
ฤแบฅt giแปฏa nฤm kiแปm kรช vแปi sแป liแปu cแปงa 02 kแปณ kiแปm kรช gแบงn nhแบฅt; ฤรกnh giรก tรฌnh
hรฌnh thแปฑc hiแปn quy hoแบกch, kแบฟ hoแบกch chuyแปn mแปฅc ฤรญch sแปญ dแปฅng ฤแบฅt trong kแปณ kiแปm
kรช ฤแบฅt ฤai; tรฌnh hรฌnh giao ฤแบฅt, cho thuรช ฤแบฅt, cho phรฉp chuyแปn mแปฅc ฤรญch sแปญ
dแปฅng ฤแบฅt nhฦฐng chฦฐa thแปฑc hiแปn; tรฌnh hรฌnh vร nguyรชn nhรขn chuyแปn mแปฅc ฤรญch sแปญ
dแปฅng ฤแบฅt khรกc vแปi hแป sฦก ฤแปa chรญnh; tรฌnh hรฌnh chuyแปn ฤแปi cฦก cแบฅu ฤแบฅt trแปng
lรบa; tรฌnh hรฌnh ฤแบฅt ngแบญp nฦฐแปc; tรฌnh hรฌnh tranh chแบฅp, giแบฃi quyแบฟt tranh chแบฅp
ฤแปa giแปi hร nh chรญnh thแปฑc hiแปn trong kแปณ kiแปm kรช (nแบฟu cรณ);
c) ฤแป xuแบฅt, kiแบฟn nghแป biแปn phรกp tฤng cฦฐแปng quแบฃn lรฝ, sแปญ dแปฅng ฤแบฅt ฤai.
sentences:
- >-
Ngฦฐแปi muแปn ฤฦฐแปฃc cแบฅp giแบฅy phรฉp kiแปm soรกt an ninh cแบฃng hร ng khรดng, sรขn bay cรณ
giรก trแป sแปญ dแปฅng ngแบฏn hแบกn cแบงn phแบฃi lร m nhแปฏng thแปง tแปฅc gรฌ?
- >-
Phรกp luแบญt quy ฤแปnh cรกc nแปi dung nร o cแบงn ฤฦฐแปฃc phรขn tรญch, ฤรกnh giรก trong bรกo
cรกo kแบฟt quแบฃ kiแปm kรช ฤแบฅt ฤai vแป tรฌnh hรฌnh biแบฟn ฤแปng sแปญ dแปฅng ฤแบฅt vร thแปฑc hiแปn
quy hoแบกch, kแบฟ hoแบกch chuyแปn mแปฅc ฤรญch sแปญ dแปฅng ฤแบฅt?
- >-
Theo quy ฤแปnh cแปงa phรกp luแบญt, Ngรขn hร ng Nhร nฦฐแปc Viแปt Nam phแบฃi quแบฃn lรฝ vร ghi
chรฉp nhฦฐ thแบฟ nร o ฤแปi vแปi sแป tiแปn cotton, polymer vร kim loแบกi ฤรฃ ฤฦฐแปฃc in, ฤรบc
nhฦฐng chฦฐa ฤฦฐแปฃc phรฉp lฦฐu hร nh?
- source_sentence: >-
ฤiแปu 85. Khu vแปฑc cแบฅm bay, khu vแปฑc hแบกn chแบฟ bay
1. Khu vแปฑc cแบฅm bay lร khu vแปฑc trรชn khรดng cรณ kรญch thฦฐแปc xรกc ฤแปnh mร tร u bay
khรดng ฤฦฐแปฃc bay vร o, trแปซ trฦฐแปng hแปฃp tร u bay cรดng vแปฅ Viแปt Nam ฤang thแปฑc hiแปn
cรดng vแปฅ. Khu vแปฑc hแบกn chแบฟ bay lร khu vแปฑc trรชn khรดng cรณ kรญch thฦฐแปc xรกc ฤแปnh mร
tร u bay chแป ฤฦฐแปฃc phรฉp hoแบกt ฤแปng tแบกi khu vแปฑc ฤรณ khi ฤรกp แปฉng cรกc ฤiแปu kiแปn cแปฅ
thแป.
2. Thแปง tฦฐแปng Chรญnh phแปง quyแบฟt ฤแปnh thiแบฟt lแบญp khu vแปฑc cแบฅm bay, khu vแปฑc hแบกn chแบฟ
bay trong lรฃnh thแป Viแปt Nam nhแบฑm mแปฅc ฤรญch bแบฃo ฤแบฃm quแปc phรฒng, an ninh, an
toร n xรฃ hแปi. Trong trฦฐแปng hแปฃp ฤแบทc biแปt vรฌ lรฝ do quแปc phรฒng, an ninh, Bแป Quแปc
phรฒng quyแบฟt ฤแปnh hแบกn chแบฟ bay tแบกm thแปi hoแบทc cแบฅm bay tแบกm thแปi tแบกi mแปt hoแบทc mแปt
sแป khu vแปฑc trong lรฃnh thแป Viแปt Nam; quyแบฟt ฤแปnh nร y cรณ hiแปu lแปฑc ngay.
3. Bแป Quแปc phรฒng quy ฤแปnh viแปc quแบฃn lรฝ khu vแปฑc cแบฅm bay vร khu vแปฑc hแบกn chแบฟ
bay.
sentences:
- >-
Phรกp luแบญt quy ฤแปnh nhแปฏng khoแบฃn phแปฅ cแบฅp, trแปฃ cแบฅp nร o ฤฦฐแปฃc miแป…n thuแบฟ thu nhแบญp cรก nhรขn?
- >-
Trฦฐแปng hแปฃp ngฦฐแปi hแปc vแบฏng mแบทt trong kแปณ kiแปm tra cรณ lรฝ do chรญnh ฤรกng, hแป sแบฝ
ฤฦฐแปฃc cฦก sแป ฤร o tแบกo sแบฏp xแบฟp kiแปm tra lแบกi nhฦฐ thแบฟ nร o?
- >-
Luแบญt hร ng khรดng dรขn dแปฅng Viแปt Nam quy ฤแปnh nhแปฏng trฦฐแปng hแปฃp nร o tร u bay ฤฦฐแปฃc
phรฉp bay vร o khu vแปฑc cแบฅm bay?
- source_sentence: >-
ฤiแปu 62. Cรกc trฦฐแปng hแปฃp khรดng phแบฃi bแปi thฦฐแปng thiแปt hแบกi
1. Ngฦฐแปi sแบฃn xuแบฅt, ngฦฐแปi nhแบญp khแบฉu khรดng phแบฃi bแปi thฦฐแปng trong cรกc trฦฐแปng
hแปฃp sau ฤรขy:
a) Ngฦฐแปi bรกn hร ng bรกn hร ng hรณa ฤรฃ hแบฟt hแบกn sแปญ dแปฅng; ngฦฐแปi tiรชu dรนng sแปญ dแปฅng
hร ng hรณa ฤรฃ hแบฟt hแบกn sแปญ dแปฅng;
b) ฤรฃ hแบฟt thแปi hiแปu khiแบฟu nแบกi, khแปi kiแปn;
c) ฤรฃ thรดng bรกo thu hแปi hร ng hรณa cรณ khuyแบฟt tแบญt ฤแบฟn ngฦฐแปi bรกn hร ng, ngฦฐแปi sแปญ
dแปฅng trฦฐแปc thแปi ฤiแปm hร ng hรณa gรขy thiแปt hแบกi;
d) Sแบฃn phแบฉm, hร ng hรณa cรณ khuyแบฟt tแบญt do tuรขn thแปง quy ฤแปnh bแบฏt buแปc cแปงa cฦก
quan nhร nฦฐแปc cรณ thแบฉm quyแปn;
ฤ) Trรฌnh ฤแป khoa hแปc, cรดng nghแป cแปงa thแบฟ giแปi chฦฐa ฤแปง ฤแป phรกt hiแปn khแบฃ nฤng
gรขy mแบฅt an toร n cแปงa sแบฃn phแบฉm tรญnh ฤแบฟn thแปi ฤiแปm hร ng hรณa gรขy thiแปt hแบกi;
e) Thiแปt hแบกi phรกt sinh do lแปi cแปงa ngฦฐแปi bรกn hร ng;
g) Thiแปt hแบกi phรกt sinh do lแปi cแปงa ngฦฐแปi mua, ngฦฐแปi tiรชu dรนng.
2. Ngฦฐแปi bรกn hร ng khรดng phแบฃi bแปi thฦฐแปng cho ngฦฐแปi mua, ngฦฐแปi tiรชu dรนng trong
cรกc trฦฐแปng hแปฃp sau ฤรขy:
sentences:
- >-
Luแบญt quy ฤแปnh nhแปฏng trฦฐแปng hแปฃp nร o thรฌ ngฦฐแปi sแบฃn xuแบฅt, ngฦฐแปi nhแบญp khแบฉu khรดng
phแบฃi bแปi thฦฐแปng thiแปt hแบกi cho ngฦฐแปi tiรชu dรนng?
- >-
Ngฦฐแปi ฤรฃ hiแบฟn bแป phแบญn cฦก thแป sแบฝ ฤฦฐแปฃc nhแบญn nhแปฏng phแบงn thฦฐแปng, ฦฐu ฤรฃi gรฌ tแปซ Bแป
Y tแบฟ?
- >-
Theo quy ฤแปnh, KBNN cรณ quyแปn tแปซ chแปi thanh toรกn, chi trแบฃ cรกc khoแบฃn chi bแบฑng
tiแปn mแบทt trong nhแปฏng trฦฐแปng hแปฃp nร o vร KBNN chแปu trรกch nhiแปm gรฌ trong cรกc
trฦฐแปng hแปฃp tแปซ chแปi thanh toรกn?
- source_sentence: >-
k) Thร nh viรชn phแบฃi duy trรฌ sแป dฦฐ tร i khoแบฃn thanh toรกn bแบฃo ฤแบฃm thแปฑc hiแปn cรกc
Lแปnh thanh toรกn vร quyแบฟt toรกn bรน trแปซ qua Hแป thแปng TTLNH;
l) Trฦฐแปng hแปฃp thร nh viรชn, ฤฦกn vแป thร nh viรชn chแบฅm dแปฉt tฦฐ cรกch thร nh viรชn, ฤฦกn
vแป thร nh viรชn, phแบฃi thแปฑc hiแปn thแปง tแปฅc ฤแป nghแป thu hแปi chแปฉng thฦฐ sแป (nแบฟu cรณ)
sแปญ dแปฅng trong TTLNH theo quy ฤแปnh tแบกi Thรดng tฦฐ vแป viแปc quแบฃn lรฝ, sแปญ dแปฅng chแปฏ
kรฝ sแป, chแปฉng thฦฐ sแป vร dแปch vแปฅ chแปฉng thแปฑc chแปฏ kรฝ sแป cแปงa Ngรขn hร ng Nhร nฦฐแปc;
m) ฤแบฃm bแบฃo, duy trรฌ hแบก tแบงng kแปน thuแบญt vร nguแปn lแปฑc quy ฤแปnh tแบกi ฤiแปm c, d
Khoแบฃn 1 vร ฤiแปm a, b Khoแบฃn 3 ฤiแปu 40 Thรดng tฦฐ nร y;
n) ฤฤng kรฝ danh sรกch ฤแปa chแป hแปp thฦฐ ฤiแปn tแปญ ฤแป trao ฤแปi cรกc thรดng tin liรชn
quan ฤแบฟn Hแป thแปng TTLNH ฤฦฐแปฃc quy ฤแปnh trao ฤแปi qua thฦฐ ฤiแปn tแปญ tแบกi Thรดng tฦฐ
nร y;
o) Chแบฅp hร nh ฤรบng cรกc quy ฤแปnh vแป thแปi ฤiแปm รกp dแปฅng trong Hแป thแปng TTLNH ฤแป
bแบฃo ฤแบฃm thanh toรกn ฤฦฐแปฃc thแปฑc hiแปn thuแบญn lแปฃi, chรญnh xรกc, kแปp thแปi vร an toร n
tร i sแบฃn;
p) Thร nh viรชn phแบฃi thฦฐแปng xuyรชn giรกm sรกt hแบกn mแปฉc nแปฃ rรฒng hiแปn thแปi cแปงa mรฌnh
ฤแป duy trรฌ แป mแปฉc thรญch hแปฃp;
sentences:
- >-
Cรกc thร nh viรชn, ฤฦกn vแป thร nh viรชn cแปงa Hแป thแปng Thanh toรกn ฤiแปn tแปญ liรชn ngรขn
hร ng Quแปc gia phแบฃi ฤแบฃm bแบฃo vร duy trรฌ nhแปฏng hแบก tแบงng kแปน thuแบญt vร nguแปn lแปฑc
gรฌ?
- >-
Bแป Quแปc phรฒng quy ฤแปnh nhฦฐ thแบฟ nร o vแป viแปc ฤiแปu chแปnh tแปท lแป khแบฅu hao tร i sแบฃn
cแป ฤแปnh ฤแป ฤแบฃm bแบฃo phรน hแปฃp vแปi lแป trรฌnh tรญnh giรก dแปch vแปฅ sแปฑ nghiแปp cรดng?
- >-
Nแบฟu sแปฑ cแป bแปฉc xแบก, hแบกt nhรขn xแบฃy ra vฦฐแปฃt quรก khแบฃ nฤng แปฉng phรณ cแปงa ฤแปa phฦฐฦกng,
Bแป Quแปc phรฒng sแบฝ hแป trแปฃ nhฦฐ thแบฟ nร o?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: VietnamLegalText-SBERT-finetuned
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.5425242718446602
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5844660194174758
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6900970873786407
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.785242718446602
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5425242718446602
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5118446601941747
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3738252427184466
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.22287378640776698
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.20184835876098012
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.543875173370319
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6517078132223763
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7707680690399137
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6604654137474918
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5960436122669122
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6377378134182617
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.5316504854368932
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5755339805825243
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.686990291262136
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7836893203883495
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5316504854368932
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.5021359223300971
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3706407766990291
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.22190291262135922
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.19825612575127138
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.5336421636615812
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.645921405455386
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7684012944983818
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6541937764554878
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5873643088303279
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6299342948833029
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.5161165048543689
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5693203883495146
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6811650485436893
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7794174757281553
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.5161165048543689
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.49022653721682846
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.3660582524271845
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.2215145631067961
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.193375866851595
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.52260009246417
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6386981044845123
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7651132686084142
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6463553546004565
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5755563260903059
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.6201771053919043
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 128
type: dim_128
metrics:
- type: cosine_accuracy@1
value: 0.4955339805825243
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.541747572815534
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6446601941747573
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7596116504854369
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4955339805825243
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.46873786407766993
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.34679611650485437
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.21421359223300973
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.18586777623670828
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4998557558945908
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.6054748035136385
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.7409255663430421
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6217574657696157
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5523821852365535
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5975578085238775
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 64
type: dim_64
metrics:
- type: cosine_accuracy@1
value: 0.4520388349514563
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.5013592233009708
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.6027184466019417
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.7157281553398058
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.4520388349514563
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.4311974110032362
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.322873786407767
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.20015533980582523
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.16861488673139158
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.4591844660194174
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.5654341192787794
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.6956985668053629
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.5771461257381547
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5088779472954221
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.5556917574193598
name: Cosine Map@100
---
# VietnamLegalText-SBERT-finetuned
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [hmthanh/VietnamLegalText-SBERT](https://huggingface.co/hmthanh/VietnamLegalText-SBERT) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [hmthanh/VietnamLegalText-SBERT](https://huggingface.co/hmthanh/VietnamLegalText-SBERT) <!-- at revision de8273cd79aaae2ffd642f411b788e4a04971530 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- json
- **Language:** vi
- **License:** apache-2.0
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
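The Pooling module above averages token embeddings over non-padding tokens. For reference, here is a minimal sketch of equivalent inference with plain 🤗 Transformers (an illustration assuming the checkpoint loads with `AutoModel`; the Sentence Transformers usage below is the supported path):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "namnguyenba2003/VietnamLegalText-SBERT-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

encoded = tokenizer(
    ["a legal text passage", "a legal question"],  # placeholder inputs
    padding=True, truncation=True, max_length=256, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 768)

# Mean pooling over non-padding tokens, mirroring the Pooling config above
mask = encoded["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(sentence_embeddings.shape)  # torch.Size([2, 768])
```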
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("namnguyenba2003/VietnamLegalText-SBERT-finetuned")
# Run inference
sentences = [
'k) Thร nh viรชn phแบฃi duy trรฌ sแป dฦฐ tร i khoแบฃn thanh toรกn bแบฃo ฤแบฃm thแปฑc hiแปn cรกc Lแปnh thanh toรกn vร quyแบฟt toรกn bรน trแปซ qua Hแป thแปng TTLNH;\nl) Trฦฐแปng hแปฃp thร nh viรชn, ฤฦกn vแป thร nh viรชn chแบฅm dแปฉt tฦฐ cรกch thร nh viรชn, ฤฦกn vแป thร nh viรชn, phแบฃi thแปฑc hiแปn thแปง tแปฅc ฤแป nghแป thu hแปi chแปฉng thฦฐ sแป (nแบฟu cรณ) sแปญ dแปฅng trong TTLNH theo quy ฤแปnh tแบกi Thรดng tฦฐ vแป viแปc quแบฃn lรฝ, sแปญ dแปฅng chแปฏ kรฝ sแป, chแปฉng thฦฐ sแป vร dแปch vแปฅ chแปฉng thแปฑc chแปฏ kรฝ sแป cแปงa Ngรขn hร ng Nhร nฦฐแปc;\nm) ฤแบฃm bแบฃo, duy trรฌ hแบก tแบงng kแปน thuแบญt vร nguแปn lแปฑc quy ฤแปnh tแบกi ฤiแปm c, d Khoแบฃn 1 vร ฤiแปm a, b Khoแบฃn 3 ฤiแปu 40 Thรดng tฦฐ nร y;\nn) ฤฤng kรฝ danh sรกch ฤแปa chแป hแปp thฦฐ ฤiแปn tแปญ ฤแป trao ฤแปi cรกc thรดng tin liรชn quan ฤแบฟn Hแป thแปng TTLNH ฤฦฐแปฃc quy ฤแปnh trao ฤแปi qua thฦฐ ฤiแปn tแปญ tแบกi Thรดng tฦฐ nร y;\no) Chแบฅp hร nh ฤรบng cรกc quy ฤแปnh vแป thแปi ฤiแปm รกp dแปฅng trong Hแป thแปng TTLNH ฤแป bแบฃo ฤแบฃm thanh toรกn ฤฦฐแปฃc thแปฑc hiแปn thuแบญn lแปฃi, chรญnh xรกc, kแปp thแปi vร an toร n tร i sแบฃn;\np) Thร nh viรชn phแบฃi thฦฐแปng xuyรชn giรกm sรกt hแบกn mแปฉc nแปฃ rรฒng hiแปn thแปi cแปงa mรฌnh ฤแป duy trรฌ แป mแปฉc thรญch hแปฃp;',
'Cรกc thร nh viรชn, ฤฦกn vแป thร nh viรชn cแปงa Hแป thแปng Thanh toรกn ฤiแปn tแปญ liรชn ngรขn hร ng Quแปc gia phแบฃi ฤแบฃm bแบฃo vร duy trรฌ nhแปฏng hแบก tแบงng kแปน thuแบญt vร nguแปn lแปฑc gรฌ?',
'Bแป Quแปc phรฒng quy ฤแปnh nhฦฐ thแบฟ nร o vแป viแปc ฤiแปu chแปnh tแปท lแป khแบฅu hao tร i sแบฃn cแป ฤแปnh ฤแป ฤแบฃm bแบฃo phรน hแปฃp vแปi lแป trรฌnh tรญnh giรก dแปch vแปฅ sแปฑ nghiแปp cรดng?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
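Because the model was trained with Matryoshka dimensions (see Training Details), embeddings can also be truncated to a smaller size at load time. A minimal sketch, assuming a recent sentence-transformers release with `truncate_dim` support:

```python
from sentence_transformers import SentenceTransformer

# Any of the trained Matryoshka dims: 768, 512, 256, 128 or 64
model = SentenceTransformer(
    "namnguyenba2003/VietnamLegalText-SBERT-finetuned",
    truncate_dim=256,
)
embeddings = model.encode(["a legal question"])  # placeholder input
print(embeddings.shape)  # (1, 256)
```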
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
|:--------------------|:-----------|:-----------|:-----------|:-----------|:-----------|
| cosine_accuracy@1 | 0.5425 | 0.5317 | 0.5161 | 0.4955 | 0.452 |
| cosine_accuracy@3 | 0.5845 | 0.5755 | 0.5693 | 0.5417 | 0.5014 |
| cosine_accuracy@5 | 0.6901 | 0.687 | 0.6812 | 0.6447 | 0.6027 |
| cosine_accuracy@10 | 0.7852 | 0.7837 | 0.7794 | 0.7596 | 0.7157 |
| cosine_precision@1 | 0.5425 | 0.5317 | 0.5161 | 0.4955 | 0.452 |
| cosine_precision@3 | 0.5118 | 0.5021 | 0.4902 | 0.4687 | 0.4312 |
| cosine_precision@5 | 0.3738 | 0.3706 | 0.3661 | 0.3468 | 0.3229 |
| cosine_precision@10 | 0.2229 | 0.2219 | 0.2215 | 0.2142 | 0.2002 |
| cosine_recall@1 | 0.2018 | 0.1983 | 0.1934 | 0.1859 | 0.1686 |
| cosine_recall@3 | 0.5439 | 0.5336 | 0.5226 | 0.4999 | 0.4592 |
| cosine_recall@5 | 0.6517 | 0.6459 | 0.6387 | 0.6055 | 0.5654 |
| cosine_recall@10 | 0.7708 | 0.7684 | 0.7651 | 0.7409 | 0.6957 |
| **cosine_ndcg@10** | **0.6605** | **0.6542** | **0.6464** | **0.6218** | **0.5771** |
| cosine_mrr@10 | 0.596 | 0.5874 | 0.5756 | 0.5524 | 0.5089 |
| cosine_map@100 | 0.6377 | 0.6299 | 0.6202 | 0.5976 | 0.5557 |
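The per-dimension scores above come from `InformationRetrievalEvaluator` runs at each truncation size. A minimal sketch of such a setup (the queries, corpus, and relevance judgments below are illustrative placeholders, not the actual evaluation split):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

queries = {"q1": "a legal question"}            # query_id -> query text
corpus = {"d1": "a relevant legal passage"}     # doc_id -> passage text
relevant_docs = {"q1": {"d1"}}                  # query_id -> relevant doc_ids

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    truncate_dim=256,  # evaluate one Matryoshka dimension at a time
    name="dim_256",
)
model = SentenceTransformer("namnguyenba2003/VietnamLegalText-SBERT-finetuned")
results = evaluator(model)  # dict of accuracy/precision/recall/NDCG/MRR/MAP scores
```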
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### json
* Dataset: json
* Size: 23,168 training samples
* Columns: <code>positive</code> and <code>anchor</code>
* Approximate statistics based on the first 1000 samples:
| | positive | anchor |
|:--------|:--------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 107 tokens</li><li>mean: 212.31 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 19 tokens</li><li>mean: 38.3 tokens</li><li>max: 140 tokens</li></ul> |
* Samples:
| positive | anchor |
|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Khoแบฃn 1. Trฦฐแปng hแปฃp ฤแป xuแบฅt dแปฑ รกn thuแปc quy mรด nhรณm C<br>a) Nhร ฤแบงu tฦฐ gแปญi ฤแป xuแบฅt dแปฑ รกn tแปi ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP.<br>b) Trong vรฒng 05 ngร y lร m viแปc kแป tแปซ ngร y nhแบญn ฤฦฐแปฃc hแป sฦก, ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP kiแปm tra hแป sฦก vร yรชu cแบงu nhร ฤแบงu tฦฐ bแป sung nแบฟu hแป sฦก chฦฐa ฤแบงy ฤแปง, hแปฃp lแป.<br>c) Trong vรฒng 20 ngร y lร m viแปc kแป tแปซ ngร y nhแบญn ฤฦฐแปฃc hแป sฦก ฤแบงy ฤแปง vร hแปฃp lแป, ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP tแป chแปฉc thแบฉm ฤแปnh ฤแป xuแบฅt dแปฑ รกn.<br>d) Trong vรฒng 05 ngร y lร m viแปc kแป tแปซ ngร y cรณ kแบฟt luแบญn thแบฉm ฤแปnh, ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP trรฌnh Bแป trฦฐแปng Bแป Cรดng Thฦฐฦกng phรช duyแปt. Trฦฐแปng hแปฃp kแบฟt luแบญn thแบฉm ฤแปnh khรดng thรดng qua ฤแป xuแบฅt dแปฑ รกn, ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP thรดng bรกo bแบฑng vฤn bแบฃn tแปi nhร ฤแบงu tฦฐ ฤแป xuแบฅt dแปฑ รกn vร nรชu rรต lรฝ do.</code> | <code>ฤฦกn vแป ฤแบงu mแปi quแบฃn lรฝ hoแบกt ฤแปng PPP cรณ nhแปฏng trรกch nhiแปm gรฌ trong quรก trรฌnh thแบฉm ฤแปnh vร phรช duyแปt ฤแป xuแบฅt dแปฑ รกn cแปงa nhร ฤแบงu tฦฐ?</code> |
| <code>ฤiแปu 11. Bรกo cรกo kแบฟt quแบฃ thแบฉm ฤแปnh giรก, Bรกo cรกo kแบฟt quแบฃ xรกc ฤแปnh giรก trแป tร i sแบฃn<br>1. Doanh nghiแปp thแบฉm ฤแปnh giรก cรณ trรกch nhiแปm cung cแบฅp Chแปฉng thฦฐ thแบฉm ฤแปnh giรก vร Bรกo cรกo kแบฟt quแบฃ thแบฉm ฤแปnh giรก theo quy ฤแปnh cแปงa Hแป thแปng tiรชu chuแบฉn thแบฉm ฤแปnh giรก Viแปt Nam.<br>2. Tแป chแปฉc cรณ chแปฉc nฤng tฦฐ vแบฅn vแป giรก cรณ trรกch nhiแปm lแบญp Bรกo cรกo kแบฟt quแบฃ xรกc ฤแปnh giรก trแป tร i sแบฃn theo Mแบซu tแบกi Phแปฅ lแปฅc kรจm theo Thรดng tฦฐ nร y.<br>3. Bรกo cรกo kแบฟt quแบฃ thแบฉm ฤแปnh giรก vร Bรกo cรกo kแบฟt quแบฃ xรกc ฤแปnh giรก trแป tร i sแบฃn phแบฃi phแบฃn รกnh trung thแปฑc, khรกch quan quรก trรฌnh vร kแบฟt quแบฃ xรกc ฤแปnh giรก tร i sแบฃn vร lร mแปt cฤn cแปฉ quan trแปng ฤแป cฦก quan quแบฃn lรฝ nhiแปm vแปฅ khoa hแปc vร cรดng nghแป trรฌnh cฦก quan cรณ thแบฉm quyแปn xem xรฉt, phรช duyแปt giรก trแป cแปงa tร i sแบฃn lร kแบฟt quแบฃ cแปงa nhiแปm vแปฅ khoa hแปc vร cรดng nghแป.</code> | <code>Doanh nghiแปp thแบฉm ฤแปnh giรก cรณ nhแปฏng trรกch nhiแปm gรฌ khi thแปฑc hiแปn thแบฉm ฤแปnh giรก tร i sแบฃn lร kแบฟt quแบฃ cแปงa nhiแปm vแปฅ khoa hแปc vร cรดng nghแป?</code> |
| <code>e) Hแป tรชn, nฤm sinh, nฦกi cฦฐ trรบ cแปงa phแบกm nhรขn;<br>g) Lรฝ do ฤฦฐแปฃc tแบกm ฤรฌnh chแป chแบฅp hร nh รกn phแบกt tรน;<br>h) Tรชn cฦก quan thi hร nh รกn hรฌnh sแปฑ, แปฆy ban nhรขn dรขn cแบฅp xรฃ, ฤฦกn vแป quรขn ฤแปi ฤฦฐแปฃc giao quแบฃn lรฝ ngฦฐแปi ฤฦฐแปฃc tแบกm ฤรฌnh chแป. Trฦฐแปng hแปฃp ngฦฐแปi ฤฦฐแปฃc tแบกm ฤรฌnh chแป bแป bแปnh nแบทng ฤang phแบฃi ฤiแปu trแป tแบกi bแปnh viแปn mร phแบฃi giao cho thรขn nhรขn chฤm sรณc thรฌ ghi thรชm hแป tรชn, nฦกi cฦฐ trรบ cแปงa thรขn nhรขn vร mแปi quan hแป giแปฏa hแป;<br>i) Thแปi hแบกn tแบกm ฤรฌnh chแป chแบฅp hร nh รกn phแบกt tรน vร hiแปu lแปฑc thi hร nh.</code> | <code>Thแปi hแบกn tแบกm ฤรฌnh chแป chแบฅp hร nh รกn phแบกt tรน vร thแปi ฤiแปm quyแบฟt ฤแปnh cรณ hiแปu lแปฑc thi hร nh ฤฦฐแปฃc quy ฤแปnh nhฦฐ thแบฟ nร o?</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
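In code, the configuration above corresponds to wrapping the in-batch-negatives loss in `MatryoshkaLoss`, which applies it at every listed dimension with equal weight. A minimal sketch (a reconstruction, not the original training script):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("hmthanh/VietnamLegalText-SBERT")  # base model named in the Model Description
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on every dimension at each step
)
```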
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `fp16`: True
- `dataloader_num_workers`: 4
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `ddp_find_unused_parameters`: False
- `batch_sampler`: no_duplicates
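The non-default values above map directly onto `SentenceTransformerTrainingArguments`. A minimal sketch, assuming the sentence-transformers v3 Trainer API (the output directory is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/VietnamLegalText-SBERT-finetuned",  # placeholder path
    num_train_epochs=4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="epoch",
    dataloader_num_workers=4,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```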
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 4
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `tp_size`: 0
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: False
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
|:----------:|:-------:|:-------------:|:----------------------:|:----------------------:|:----------------------:|:----------------------:|:---------------------:|
| 0.2210 | 10 | 202.143 | - | - | - | - | - |
| 0.4420 | 20 | 59.6662 | - | - | - | - | - |
| 0.6630 | 30 | 28.2853 | - | - | - | - | - |
| 0.8840 | 40 | 17.9881 | - | - | - | - | - |
| 1.0 | 46 | - | 0.6067 | 0.6029 | 0.5918 | 0.5690 | 0.5172 |
| 1.0884 | 50 | 12.2072 | - | - | - | - | - |
| 1.3094 | 60 | 9.2488 | - | - | - | - | - |
| 1.5304 | 70 | 8.6885 | - | - | - | - | - |
| 1.7514 | 80 | 8.8927 | - | - | - | - | - |
| 1.9724 | 90 | 7.7438 | - | - | - | - | - |
| 2.0 | 92 | - | 0.6467 | 0.6451 | 0.6323 | 0.6056 | 0.5596 |
| 2.1768 | 100 | 6.1924 | - | - | - | - | - |
| 2.3978 | 110 | 6.3728 | - | - | - | - | - |
| 2.6188 | 120 | 5.7702 | - | - | - | - | - |
| 2.8398 | 130 | 5.0061 | - | - | - | - | - |
| 3.0 | 138 | - | 0.6560 | 0.6502 | 0.6445 | 0.6196 | 0.5736 |
| 3.0442 | 140 | 5.6389 | - | - | - | - | - |
| 3.2652 | 150 | 5.1059 | - | - | - | - | - |
| 3.4862 | 160 | 5.1945 | - | - | - | - | - |
| 3.7072 | 170 | 5.0158 | - | - | - | - | - |
| **3.9282** | **180** | **5.092** | **0.6605** | **0.6542** | **0.6464** | **0.6218** | **0.5771** |
* The bold row denotes the saved checkpoint.
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.6.0+cu124
- Accelerate: 1.5.2
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
Priyanship/base_sami_22k_ftallpseudo_ftlabelled_sami_parliament_alld0 | Priyanship | 2025-05-27T19:53:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2025-05-27T18:39:04Z | ---
library_name: transformers
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: base_sami_22k_ftallpseudo_ftlabelled_sami_parliament_alld0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# base_sami_22k_ftallpseudo_ftlabelled_sami_parliament_alld0
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 189.4120
- Wer: 0.3890
- Cer: 0.1332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.25
- num_epochs: 60.0
- mixed_precision_training: Native AMP
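A minimal sketch mapping the values above onto `transformers.TrainingArguments`; the output directory is a placeholder, and the data preparation and model setup are not shown in this card:
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-sami-finetuned",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.25,
    num_train_epochs=60.0,
    fp16=True,  # Native AMP mixed-precision training
)
```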
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 543.4487 | 1.0 | 446 | 189.4152 | 0.3905 | 0.1331 |
| 468.607 | 2.0 | 892 | 195.9747 | 0.3961 | 0.1389 |
| 431.6385 | 3.0 | 1338 | 213.4495 | 0.4097 | 0.1307 |
| 423.9342 | 4.0 | 1784 | 255.1584 | 0.4358 | 0.1790 |
| 428.2925 | 5.0 | 2230 | 242.8004 | 0.4474 | 0.1565 |
| 446.372 | 6.0 | 2676 | 294.5761 | 0.4721 | 0.1664 |
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1
- Datasets 3.2.0
- Tokenizers 0.21.0
|
plumpyfield/natix-hot37 | plumpyfield | 2025-05-27T19:53:11Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:53:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot52 | plumpyfield | 2025-05-27T19:51:57Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:51:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot47 | plumpyfield | 2025-05-27T19:50:52Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:50:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot39 | plumpyfield | 2025-05-27T19:50:14Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:50:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Blinorot/MNLP_M2_dpo_model | Blinorot | 2025-05-27T19:49:56Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"alignment-handbook",
"trl",
"dpo",
"conversational",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"arxiv:2305.18290",
"base_model:Blinorot/qwen3-06.B-sft",
"base_model:finetune:Blinorot/qwen3-06.B-sft",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:49:02Z | ---
base_model: Blinorot/qwen3-06.B-sft
datasets:
- HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
model_name: qwen3-06.B-dpo
tags:
- generated_from_trainer
- alignment-handbook
- trl
- dpo
licence: license
---
# Model Card for qwen3-06.B-dpo
This model is a fine-tuned version of [Blinorot/qwen3-06.B-sft](https://huggingface.co/Blinorot/qwen3-06.B-sft) on the [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Blinorot/qwen3-06.B-dpo", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/blinorot/huggingface/runs/d5yfm6sl)
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.17.0
- Transformers: 4.51.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
IamJunhee/Gemma3-Agricsense_lora | IamJunhee | 2025-05-27T19:48:59Z | 111 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"endpoints_compatible",
"region:us"
]
| null | 2025-04-18T07:32:35Z | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
library_name: transformers
model_name: Gemma3-Agricsense_lora
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for Gemma3-Agricsense_lora
This model is a fine-tuned version of [unsloth/gemma-3-4b-it-unsloth-bnb-4bit](https://huggingface.co/unsloth/gemma-3-4b-it-unsloth-bnb-4bit).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="IamJunhee/Gemma3-Agricsense_lora", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
plumpyfield/natix-hot20 | plumpyfield | 2025-05-27T19:48:48Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:48:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
plumpyfield/natix-hot8 | plumpyfield | 2025-05-27T19:48:35Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:48:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
one-girl-one-wolf-0/one.girl.one.wolf.viral.videos | one-girl-one-wolf-0 | 2025-05-27T19:48:16Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:47:51Z |
|
ErikCikalleshi/Qwen3-1.7B-unsloth-bnb-4bit_alpaca_model_4bit | ErikCikalleshi | 2025-05-27T19:48:10Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:quantized:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2025-05-25T09:06:55Z | ---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ErikCikalleshi
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
muqtasid87/gemma3b_finetuned_v2 | muqtasid87 | 2025-05-27T19:46:10Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"text-generation-inference",
"unsloth",
"gemma3",
"conversational",
"en",
"base_model:unsloth/gemma-3-4b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:44:24Z | ---
base_model: unsloth/gemma-3-4b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** muqtasid87
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
muqtasid87/gemma3_lora_adapters_v1 | muqtasid87 | 2025-05-27T19:44:15Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-27T19:27:49Z | ---
base_model: unsloth/gemma-3-4b-it-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** muqtasid87
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-it-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
logasanjeev/goemotions-bert | logasanjeev | 2025-05-27T19:41:37Z | 2,226 | 1 | transformers | [
"transformers",
"onnx",
"safetensors",
"text-classification",
"pytorch",
"multi-label-classification",
"multi-class-classification",
"emotion",
"bert",
"go_emotions",
"emotion-classification",
"sentiment-analysis",
"en",
"dataset:google-research-datasets/go_emotions",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2025-04-12T10:30:03Z | ---
language: en
license: mit
pipeline_tag: text-classification
tags:
- text-classification
- transformers
- pytorch
- onnx
- multi-label-classification
- multi-class-classification
- emotion
- bert
- go_emotions
- emotion-classification
- sentiment-analysis
datasets:
- google-research-datasets/go_emotions
metrics:
- f1
- precision
- recall
- accuracy
widget:
- text: I'm just chilling today.
example_title: Neutral Example
- text: Thank you for saving my life!
example_title: Gratitude Example
- text: I'm nervous about my exam tomorrow.
example_title: Nervousness Example
- text: I love my new puppy so much!
example_title: Love Example
- text: I'm so relieved the storm passed.
example_title: Relief Example
base_model:
- google-bert/bert-base-uncased
base_model_relation: finetune
model-index:
- name: GoEmotions BERT Classifier
results:
- task:
type: multi-label-classification
dataset:
name: GoEmotions
type: google-research-datasets/go_emotions
metrics:
- name: Micro F1 (Optimized Thresholds)
type: micro-f1
value: 0.6006
- name: Macro F1
type: macro-f1
value: 0.539
- name: Precision
type: precision
value: 0.5371
- name: Recall
type: recall
value: 0.6812
- name: Hamming Loss
type: hamming-loss
value: 0.0377
- name: Avg Positive Predictions
type: avg-positive-predictions
value: 1.4789
- task:
type: multi-label-classification
dataset:
name: GoEmotions
type: google-research-datasets/go_emotions
metrics:
- name: F1 (admiration)
type: f1
value: 0.6987
- name: F1 (amusement)
type: f1
value: 0.8071
- name: F1 (anger)
type: f1
value: 0.503
- name: F1 (annoyance)
type: f1
value: 0.3892
- name: F1 (approval)
type: f1
value: 0.3915
- name: F1 (caring)
type: f1
value: 0.4473
- name: F1 (confusion)
type: f1
value: 0.4714
- name: F1 (curiosity)
type: f1
value: 0.5781
- name: F1 (desire)
type: f1
value: 0.5229
- name: F1 (disappointment)
type: f1
value: 0.3333
- name: F1 (disapproval)
type: f1
value: 0.4323
- name: F1 (disgust)
type: f1
value: 0.4926
- name: F1 (embarrassment)
type: f1
value: 0.4912
- name: F1 (excitement)
type: f1
value: 0.4571
- name: F1 (fear)
type: f1
value: 0.586
- name: F1 (gratitude)
type: f1
value: 0.9102
- name: F1 (grief)
type: f1
value: 0.3333
- name: F1 (joy)
type: f1
value: 0.6135
- name: F1 (love)
type: f1
value: 0.8065
- name: F1 (nervousness)
type: f1
value: 0.4348
- name: F1 (optimism)
type: f1
value: 0.5564
- name: F1 (pride)
type: f1
value: 0.5217
- name: F1 (realization)
type: f1
value: 0.2513
- name: F1 (relief)
type: f1
value: 0.5833
- name: F1 (remorse)
type: f1
value: 0.68
- name: F1 (sadness)
type: f1
value: 0.557
- name: F1 (surprise)
type: f1
value: 0.5562
- name: F1 (neutral)
type: f1
value: 0.6867
source:
name: Kaggle Evaluation Notebook
url: >-
https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-goemotions-bert/notebook
---
# GoEmotions BERT Classifier
Fine-tuned [BERT-base-uncased](https://huggingface.co/bert-base-uncased) on [GoEmotions](https://huggingface.co/datasets/go_emotions) for multi-label classification (28 emotions). This updated version includes improved Macro F1, ONNX support for efficient inference, and visualizations for better interpretability.
## Model Details
- **Architecture**: BERT-base-uncased (110M parameters)
- **Training Data**: [GoEmotions](https://huggingface.co/datasets/google-research-datasets/go_emotions) (58k Reddit comments, 28 emotions)
- **Loss Function**: Focal Loss (alpha=1, gamma=2)
- **Optimizer**: AdamW (lr=2e-5, weight_decay=0.01)
- **Epochs**: 5
- **Batch Size**: 16
- **Max Length**: 128
- **Hardware**: Kaggle P100 GPU (16GB)
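Focal loss reshapes the per-label binary cross-entropy so that easy, confidently-classified examples contribute little, which helps the rarer emotions (e.g. grief, pride). A minimal sketch of the multi-label variant with the stated alpha=1 and gamma=2, assuming the standard BCE-based formulation; the exact implementation used in training may differ:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 1.0, gamma: float = 2.0) -> torch.Tensor:
    # Unreduced per-label BCE so each of the 28 labels is re-weighted separately.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # model probability assigned to the true label value
    # (1 - p_t)^gamma down-weights confident, correct predictions.
    return (alpha * (1.0 - p_t) ** gamma * bce).mean()
```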
## Try It Out
For accurate predictions with optimized thresholds, use the [Gradio demo](https://logasanjeev-goemotions-bert-demo.hf.space). The demo now includes preprocessed text and the top 5 predicted emotions, in addition to thresholded predictions. Example predictions:
- **Input**: "Iโm thrilled to win this award! ๐"
- **Output**: `excitement: 0.5836, joy: 0.5290`
- **Input**: "This is so frustrating, nothing works. ๐ฃ"
- **Output**: `annoyance: 0.6147, anger: 0.4669`
- **Input**: "I feel so sorry for what happened. ๐ข"
- **Output**: `sadness: 0.5321, remorse: 0.9107`
## Performance
- **Micro F1**: 0.6006 (optimized thresholds)
- **Macro F1**: 0.5390
- **Precision**: 0.5371
- **Recall**: 0.6812
- **Hamming Loss**: 0.0377
- **Avg Positive Predictions**: 1.4789
For a detailed evaluation, including class-wise accuracy, precision, recall, F1, MCC, support, and thresholds, along with visualizations, check out the [Kaggle notebook](https://www.kaggle.com/code/ravindranlogasanjeev/evaluation-logasanjeev-goemotions-bert/notebook).
### Class-Wise Performance
The following table shows per-class metrics on the test set using optimized thresholds (see `optimized_thresholds.json`):
| Emotion | Accuracy | Precision | Recall | F1 Score | MCC | Support | Threshold |
|---------------|----------|-----------|--------|----------|--------|---------|-----------|
| admiration | 0.9410 | 0.6649 | 0.7361 | 0.6987 | 0.6672 | 504 | 0.4500 |
| amusement | 0.9801 | 0.7635 | 0.8561 | 0.8071 | 0.7981 | 264 | 0.4500 |
| anger | 0.9694 | 0.6176 | 0.4242 | 0.5030 | 0.4970 | 198 | 0.4500 |
| annoyance | 0.9121 | 0.3297 | 0.4750 | 0.3892 | 0.3502 | 320 | 0.3500 |
| approval | 0.8843 | 0.2966 | 0.5755 | 0.3915 | 0.3572 | 351 | 0.3500 |
| caring | 0.9759 | 0.5196 | 0.3926 | 0.4473 | 0.4396 | 135 | 0.4500 |
| confusion | 0.9711 | 0.4861 | 0.4575 | 0.4714 | 0.4567 | 153 | 0.4500 |
| curiosity | 0.9368 | 0.4442 | 0.8275 | 0.5781 | 0.5783 | 284 | 0.4000 |
| desire | 0.9865 | 0.5714 | 0.4819 | 0.5229 | 0.5180 | 83 | 0.4000 |
| disappointment| 0.9565 | 0.2906 | 0.3907 | 0.3333 | 0.3150 | 151 | 0.3500 |
| disapproval | 0.9235 | 0.3405 | 0.5918 | 0.4323 | 0.4118 | 267 | 0.3500 |
| disgust | 0.9810 | 0.6250 | 0.4065 | 0.4926 | 0.4950 | 123 | 0.5500 |
| embarrassment | 0.9947 | 0.7000 | 0.3784 | 0.4912 | 0.5123 | 37 | 0.5000 |
| excitement | 0.9790 | 0.4486 | 0.4660 | 0.4571 | 0.4465 | 103 | 0.4000 |
| fear | 0.9836 | 0.4599 | 0.8077 | 0.5860 | 0.6023 | 78 | 0.3000 |
| gratitude | 0.9888 | 0.9450 | 0.8778 | 0.9102 | 0.9049 | 352 | 0.5500 |
| grief | 0.9985 | 0.3333 | 0.3333 | 0.3333 | 0.3326 | 6 | 0.3000 |
| joy | 0.9768 | 0.6061 | 0.6211 | 0.6135 | 0.6016 | 161 | 0.4500 |
| love | 0.9825 | 0.7826 | 0.8319 | 0.8065 | 0.7978 | 238 | 0.5000 |
| nervousness | 0.9952 | 0.4348 | 0.4348 | 0.4348 | 0.4324 | 23 | 0.4000 |
| optimism | 0.9689 | 0.5436 | 0.5699 | 0.5564 | 0.5405 | 186 | 0.4000 |
| pride | 0.9980 | 0.8571 | 0.3750 | 0.5217 | 0.5662 | 16 | 0.4000 |
| realization | 0.9737 | 0.5217 | 0.1655 | 0.2513 | 0.2838 | 145 | 0.4500 |
| relief | 0.9982 | 0.5385 | 0.6364 | 0.5833 | 0.5845 | 11 | 0.3000 |
| remorse | 0.9912 | 0.5426 | 0.9107 | 0.6800 | 0.6992 | 56 | 0.3500 |
| sadness | 0.9757 | 0.5845 | 0.5321 | 0.5570 | 0.5452 | 156 | 0.4500 |
| surprise | 0.9724 | 0.4772 | 0.6667 | 0.5562 | 0.5504 | 141 | 0.3500 |
| neutral | 0.7485 | 0.5821 | 0.8372 | 0.6867 | 0.5102 | 1787 | 0.4000 |
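The per-class thresholds above were presumably tuned on validation data to maximize each emotion's F1. A minimal, hypothetical sketch of that kind of per-class search (not the author's exact procedure; `val_probs` and `val_labels` are placeholder arrays):

```python
import numpy as np
from sklearn.metrics import f1_score

def optimize_thresholds(val_probs, val_labels, grid=np.arange(0.30, 0.60, 0.05)):
    """Pick the F1-maximizing decision threshold per class on validation data.

    val_probs, val_labels: (n_samples, n_classes) arrays of sigmoid outputs
    and binary targets; returns one threshold per class.
    """
    best = []
    for c in range(val_probs.shape[1]):
        scores = [f1_score(val_labels[:, c], val_probs[:, c] >= t, zero_division=0)
                  for t in grid]
        best.append(float(grid[int(np.argmax(scores))]))
    return best
```

The grid matches the 0.30-0.55 range (in steps of 0.05) seen in the thresholds column above.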
### Visualizations
#### Class-Wise F1 Scores

#### Training Curves

## Training Insights
The model was trained for 5 epochs with Focal Loss to handle class imbalance; a minimal sketch of this loss follows the list below. Training and validation curves show consistent improvement:
- Training Loss decreased from 0.0429 to 0.0134.
- Validation Micro F1 peaked at 0.5874 (epoch 5).
- See the training curves plot above for details.
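For reference, focal loss for multi-label classification is a modulated binary cross-entropy that down-weights examples the model already gets right. Below is a minimal PyTorch sketch using the stated alpha=1, gamma=2; it is an illustration, not the exact training code used here:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=1.0, gamma=2.0):
    """Multi-label focal loss: BCE scaled down where the model is already confident."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true label
    return (alpha * (1 - p_t) ** gamma * bce).mean()

# Example: a batch of 2 samples over the 28 GoEmotions labels
logits = torch.randn(2, 28)
targets = torch.zeros(2, 28)
targets[0, 17] = 1.0
print(focal_loss(logits, targets))
```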
## Usage
### Quick Inference with inference.py (Recommended for PyTorch)
The easiest way to use the model with PyTorch is to programmatically fetch and use `inference.py` from the repository. The script handles all preprocessing, model loading, and inference for you.
#### Programmatic Download and Inference
Run the following Python script to download `inference.py` and make predictions:
```python
# First install dependencies (in a notebook: !pip install transformers torch huggingface_hub emoji -q)
import shutil
import os
from huggingface_hub import hf_hub_download
from importlib import import_module
repo_id = "logasanjeev/goemotions-bert"
local_file = hf_hub_download(repo_id=repo_id, filename="inference.py")
current_dir = os.getcwd()
destination = os.path.join(current_dir, "inference.py")
shutil.copy(local_file, destination)
inference_module = import_module("inference")
predict_emotions = inference_module.predict_emotions
text = "Iโm thrilled to win this award! ๐"
result, processed = predict_emotions(text)
print(f"Input: {text}")
print(f"Processed: {processed}")
print("Predicted Emotions:")
print(result)
```
#### Expected Output:
```
Input: I'm thrilled to win this award! 😄
Processed: i'm thrilled to win this award ! grinning_face_with_smiling_eyes
Predicted Emotions:
excitement: 0.5836
joy: 0.5290
```
#### Alternative: Manual Download
If you prefer to download `inference.py` manually:
1. Install the required dependencies:
```bash
pip install transformers torch huggingface_hub emoji
```
2. Download `inference.py` from the repository.
3. Use it in Python or via the command line.
**Python Example:**
```python
from inference import predict_emotions
result, processed = predict_emotions("I'm thrilled to win this award! 😄")
print(f"Input: I'm thrilled to win this award! 😄")
print(f"Processed: {processed}")
print("Predicted Emotions:")
print(result)
```
**Command-Line Example:**
```bash
python inference.py "I'm thrilled to win this award! 😄"
```
### Quick Inference with onnx_inference.py (Recommended for ONNX)
For faster and more efficient inference using ONNX, you can use `onnx_inference.py`. This script leverages ONNX Runtime for inference, which is typically more lightweight than PyTorch.
#### Programmatic Download and Inference
Run the following Python script to download `onnx_inference.py` and make predictions:
```python
# First install dependencies (in a notebook: !pip install transformers onnxruntime huggingface_hub emoji numpy -q)
import shutil
import os
from huggingface_hub import hf_hub_download
from importlib import import_module
repo_id = "logasanjeev/goemotions-bert"
local_file = hf_hub_download(repo_id=repo_id, filename="onnx_inference.py")
current_dir = os.getcwd()
destination = os.path.join(current_dir, "onnx_inference.py")
shutil.copy(local_file, destination)
onnx_inference_module = import_module("onnx_inference")
predict_emotions = onnx_inference_module.predict_emotions
text = "Iโm thrilled to win this award! ๐"
result, processed = predict_emotions(text)
print(f"Input: {text}")
print(f"Processed: {processed}")
print("Predicted Emotions:")
print(result)
```
#### Expected Output:
```
Input: I'm thrilled to win this award! 😄
Processed: i'm thrilled to win this award ! grinning_face_with_smiling_eyes
Predicted Emotions:
excitement: 0.5836
joy: 0.5290
```
#### Alternative: Manual Download
If you prefer to download `onnx_inference.py` manually:
1. Install the required dependencies:
```bash
pip install transformers onnxruntime huggingface_hub emoji numpy
```
2. Download `onnx_inference.py` from the repository.
3. Use it in Python or via the command line.
**Python Example:**
```python
from onnx_inference import predict_emotions
result, processed = predict_emotions("I'm thrilled to win this award! 😄")
print(f"Input: I'm thrilled to win this award! 😄")
print(f"Processed: {processed}")
print("Predicted Emotions:")
print(result)
```
**Command-Line Example:**
```bash
python onnx_inference.py "I'm thrilled to win this award! 😄"
```
### Preprocessing
Before inference, preprocess text to match training conditions:
- Replace user mentions (`u/username`) with `[USER]`.
- Replace subreddits (`r/subreddit`) with `[SUBREDDIT]`.
- Replace URLs with `[URL]`.
- Convert emojis to text using `emoji.demojize` (e.g., 😊 → `smiling_face_with_smiling_eyes`).
- Lowercase the text.
### PyTorch Inference
```python
from transformers import BertForSequenceClassification, BertTokenizer
import torch
import json
import requests
import re
import emoji
def preprocess_text(text):
text = re.sub(r'u/\w+', '[USER]', text)
text = re.sub(r'r/\w+', '[SUBREDDIT]', text)
text = re.sub(r'http[s]?://\S+', '[URL]', text)
text = emoji.demojize(text, delimiters=(" ", " "))
text = text.lower()
return text
repo_id = "logasanjeev/goemotions-bert"
model = BertForSequenceClassification.from_pretrained(repo_id)
tokenizer = BertTokenizer.from_pretrained(repo_id)
thresholds_url = f"https://huggingface.co/{repo_id}/raw/main/optimized_thresholds.json"
thresholds_data = json.loads(requests.get(thresholds_url).text)
emotion_labels = thresholds_data["emotion_labels"]
thresholds = thresholds_data["thresholds"]
text = "Iโm just chilling today."
processed_text = preprocess_text(text)
encodings = tokenizer(processed_text, padding='max_length', truncation=True, max_length=128, return_tensors='pt')
with torch.no_grad():
logits = torch.sigmoid(model(**encodings).logits).numpy()[0]
predictions = [(emotion_labels[i], round(logit, 4)) for i, (logit, thresh) in enumerate(zip(logits, thresholds)) if logit >= thresh]
predictions = sorted(predictions, key=lambda x: x[1], reverse=True)
print(predictions)
# Output: [('neutral', 0.8147)]
```
### ONNX Inference
For a simplified ONNX inference experience, use `onnx_inference.py` as shown above. Alternatively, you can use the manual approach below:
```python
import onnxruntime as ort
import numpy as np
# Reuses requests, tokenizer, preprocess_text, repo_id, emotion_labels, and
# thresholds from the PyTorch example above.
onnx_url = f"https://huggingface.co/{repo_id}/resolve/main/model.onnx"  # use resolve/, not raw/, so the LFS binary (not its pointer file) is downloaded
with open("model.onnx", "wb") as f:
f.write(requests.get(onnx_url).content)
text = "Iโm thrilled to win this award! ๐"
processed_text = preprocess_text(text)
encodings = tokenizer(processed_text, padding='max_length', truncation=True, max_length=128, return_tensors='np')
session = ort.InferenceSession("model.onnx")
inputs = {
'input_ids': encodings['input_ids'].astype(np.int64),
'attention_mask': encodings['attention_mask'].astype(np.int64)
}
logits = session.run(None, inputs)[0][0]
logits = 1 / (1 + np.exp(-logits)) # Sigmoid
predictions = [(emotion_labels[i], round(logit, 4)) for i, (logit, thresh) in enumerate(zip(logits, thresholds)) if logit >= thresh]
predictions = sorted(predictions, key=lambda x: x[1], reverse=True)
print(predictions)
# Output: [('excitement', 0.5836), ('joy', 0.5290)]
```
## License
This model is licensed under the MIT License. See [LICENSE](LICENSE) for details.
## Usage Notes
- The model performs best on Reddit-style comments with similar preprocessing.
- Rare emotions (e.g., `grief`, support=6) have lower F1 scores due to limited data.
- ONNX inference requires `onnxruntime` and compatible hardware (opset 14); a sketch of the corresponding export follows below.
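For reference, an ONNX file like the bundled `model.onnx` can be exported from the PyTorch checkpoint roughly as follows. This is a sketch, not necessarily the exact command used to produce the artifact; the input/output names are assumptions chosen to match the inference code above:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

repo_id = "logasanjeev/goemotions-bert"
# return_dict=False makes the model return a plain tuple, which traces cleanly
model = BertForSequenceClassification.from_pretrained(repo_id, return_dict=False).eval()
tokenizer = BertTokenizer.from_pretrained(repo_id)

dummy = tokenizer("hello", padding="max_length", truncation=True,
                  max_length=128, return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    opset_version=14,  # matches the opset noted above
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={"input_ids": {0: "batch"}, "attention_mask": {0: "batch"},
                  "logits": {0: "batch"}},
)
```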
|
plumpyfield/natix-hot9 | plumpyfield | 2025-05-27T19:41:36Z | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2025-05-27T19:34:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
one-girl-one-wolf-viral-video/one.girl.one.wolf.viral.video.hd | one-girl-one-wolf-viral-video | 2025-05-27T19:40:42Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:40:16Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ CLICK HERE ==►► Full Video</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►CLICK HERE 🌐==►► Download Now⬇️⬇️</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
phospho-app/MarcWester-ACT-m8-acu8f | phospho-app | 2025-05-27T19:35:38Z | 0 | 0 | null | [
"safetensors",
"phosphobot",
"act",
"region:us"
]
| null | 2025-05-27T17:18:04Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful, try it out on your robot!
## Training parameters:
- **Dataset**: [MarcWester/m8](https://huggingface.co/datasets/MarcWester/m8)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 40
- **Training steps**: 8000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
othoi-113-viral-video-link-hq/othoiiii.viral.video.link.othoi.viral.video.link.1.13.seconds | othoi-113-viral-video-link-hq | 2025-05-27T19:35:05Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:34:41Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ CLICK HERE ==►► Full Video</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►CLICK HERE 🌐==►► Download Now⬇️⬇️</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
wATCH-Sophie-Rain-Sophie-Rain-Videoss/Sophie.Rain.Spiderman.Video.Tutorial | wATCH-Sophie-Rain-Sophie-Rain-Videoss | 2025-05-27T19:33:26Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:25:19Z | 18 seconds ago
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">โบโบโ
๐พ๐๐๐พ๐ ๐๐๐๐ ==โบโบ ๐๐ช๐ก๐ก ๐๐๐๐๐ค๏ธโ</a></p>
<a href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman" rel="nofollow">๐ดโบ๐๐๐๐๐ ๐๐๐๐ ๐==โบโบ ๐๐จ๐ฐ๐ง๐ฅ๐จ๐๐ ๐๐จ๐ฐโฌ๏ธโฌ๏ธโ</a></p>
<p><a rel="nofollow" title="WATCH NOW" href="https://tv2online.com/Leaked/?v=Sophie+Rain+Spiderman"><img border="Sophie+Rain+Spidermanno" height="480" width="720" title="WATCH NOW" alt="WATCH NOW" src="https://i.ibb.co.com/xMMVF88/686577567.gif"></a></p>
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
. . . . . . . . . Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
Leaked Video Sophie Rain Spiderman Video Leaked Original Video Viral Video Leaked on X Twitter Telegram
[-wATCH-] Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video Viral On Social Media X Trending Now
[-wATCH-] Sophie Rain Spiderman Video Leaked Video Viral On Social Media X Twitter
[-wATCH-] Sophie Rain Spiderman Video Leaked Video Original Video Link Sophie Rain Spiderman Video Leaked Video Viral On Social Media X Trending Now
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
. . . . . . . . . Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter. , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , , ,
Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video. Leaked Video Sophie ...27 seconds ago - Sophie Rain Spiderman Viral Video Original Viral video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman Video, a young and talented digital creator, recently became famous thanks to this interesting video.
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter Telegram
Leaked Video Sophie Rain Spiderman Video Tutorial Original Video Viral Video Leaked on X Twitter
Sophie Rain Spiderman Video Tutorial Original Video video oficial twitter
Related Search :
sophie rain nude
sophie rain porn
sophie rain naked
sophie rain nudes
sophie rain leaks
sophie rain onlyfans
sophie rain leaked
sophie rain spiderman video
sophie rain leak
sophie rain age
sophie rain spiderman
sophie rain pussy
sophie rain xxx
sophie rain sex tape
sophie rain spider man
sophie rain spiderman video oficial
sophie rain leaked nudes
sophie rain onlyfans leaked
sophie rain erome
sophie rain spiderman video instagram
sophie rain spiderman leak
sophie rain spiderman video tutorial
sophie rain spiderman video twitter
sophie rain spiderman vid
sophie rain spiderman video leaked
sophie rain spiderman porn
sophie rain spiderman video oficial twitter
sophie rain spiderman video tiktok original
spider man sophie rain spiderman
sophie rain spiderman leaked
sophie rain spiderman video leak
sophie rain spiderman twitter
sophie rain spiderman xxx
sophie rain spiderman video xxx
sophie rain spiderman tiktok
sophie rain spiderman video instagram full video
|
minpeter/FLUX-majicflus-v1-diffusers | minpeter | 2025-05-27T19:32:49Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"endpoints_compatible",
"diffusers:FluxPipeline",
"region:us"
]
| text-to-image | 2025-05-27T16:57:11Z | ---
base_model:
- black-forest-labs/FLUX.1-dev
library_name: diffusers
---
# majicflus v1
Converts a single-file checkpoint into the multi-folder diffusers layout so it can be loaded with `from_pretrained` in a very simple way.
```python
from diffusers import FluxPipeline, FluxTransformer2DModel
import torch
dtype = torch.float8_e4m3fn  # load weights as fp8 to reduce peak memory; upcast to bf16 below
transformer = (
FluxTransformer2DModel.from_single_file(
# Remove the "model.diffusion_model." prefix from the safetensors key
"./majicflus_v1_cleaned.safetensors",
torch_dtype=dtype,
)
.to("cuda")
.to(torch.bfloat16)
)
pipe = FluxPipeline.from_pretrained(
"black-forest-labs/FLUX.1-dev", transformer=transformer
)
pipe.save_pretrained("./majicflus_v1_conv_diffusers/model")
```
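The converted folder then loads like any other diffusers checkpoint. A minimal usage sketch (prompt and sampler settings are placeholders, not recommendations from the author):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "./majicflus_v1_conv_diffusers/model", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "portrait photo, soft window light",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("sample.png")
```
|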
nimra-mehra-hd/Link.Video.18.nimra.mehra.jobz.hunting.video.nimra.mehra.video.nimra.mehra | nimra-mehra-hd | 2025-05-27T19:30:23Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:26:02Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=nimra-mehra)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=nimra-mehra)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=nimra-mehra) |
othoi-apu-viral-video-link/VIDEO.18.Othoi.1.13.Viral.Video.Full.Video.Original.Clip | othoi-apu-viral-video-link | 2025-05-27T19:23:28Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:23:05Z | <a rel="nofollow" href="https://viralflix.xyz/leaked/?new">►►✅ CLICK HERE ==►► Full Video</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new">🔴►CLICK HERE 🌐==►► Download Now⬇️⬇️</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?new"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
|
allura-forge/q3-8b-rc1 | allura-forge | 2025-05-27T19:22:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Qwen/Qwen3-8B-Base",
"base_model:merge:Qwen/Qwen3-8B-Base",
"base_model:allura-forge/q3-8b-ft-ep2-merged",
"base_model:merge:allura-forge/q3-8b-ft-ep2-merged",
"base_model:allura-org/remnant-qwen3-8b",
"base_model:merge:allura-org/remnant-qwen3-8b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2025-05-27T19:21:36Z | ---
base_model:
- allura-forge/q3-8b-ft-ep2-merged
- Qwen/Qwen3-8B-Base
- allura-org/remnant-qwen3-8b
library_name: transformers
tags:
- mergekit
- merge
---
# output
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base) as a base.
### Models Merged
The following models were included in the merge:
* [allura-forge/q3-8b-ft-ep2-merged](https://huggingface.co/allura-forge/q3-8b-ft-ep2-merged)
* [allura-org/remnant-qwen3-8b](https://huggingface.co/allura-org/remnant-qwen3-8b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: Qwen/Qwen3-8B-Base
models:
- model: allura-forge/q3-8b-ft-ep2-merged
parameters:
weight: 0.75
density: 0.9
- model: allura-org/remnant-qwen3-8b
parameters:
weight: 0.25
density: 0.5
merge_method: ties
dtype: bfloat16
```
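To reproduce a merge from a config like this, the standard mergekit CLI can be invoked as below (assuming the YAML above is saved as `config.yaml`; exact flags may vary across mergekit versions):

```bash
pip install mergekit
mergekit-yaml config.yaml ./merged-model --cuda
```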
|
Lubna-qureshi-Hd/lubna.qureshi.viral.video.HOT.NEws.Today.Trending.Latest.Video | Lubna-qureshi-Hd | 2025-05-27T19:20:58Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:17:26Z | [๐ CLICK HERE ๐ข==โบโบ WATCH NOW](https://videohere.top/?V=Lubna-qureshi)
[๐ด CLICK HERE ๐==โบโบ Download Now)](https://videohere.top/?V=Lubna-qureshi)
[<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Lubna-qureshi) |
ErikCikalleshi/alpaca_lora_model | ErikCikalleshi | 2025-05-27T19:19:51Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2025-05-22T19:35:59Z | ---
base_model: unsloth/qwen3-14b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ErikCikalleshi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Makrrr/ppo-Huggy | Makrrr | 2025-05-27T19:19:19Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
]
| reinforcement-learning | 2025-05-27T19:19:14Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Makrrr/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
othoi-1-13-viral/EXCLUSIVE.TRENDING.CLIP.othoi.113.Viral.Video.Leaks.Official | othoi-1-13-viral | 2025-05-27T19:18:34Z | 0 | 0 | null | [
"region:us"
]
| null | 2025-05-27T19:11:12Z | [🔴 ❤►Click Here today (Full video Link)](https://videohere.top/?othoi-113)
[►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://videohere.top/?othoi-113)
[<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?othoi-113) |