modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-08-02 18:27:42) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (549 classes) | tags (list, 1 to 4.05k items) | pipeline_tag (55 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-08-02 18:24:50) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---
Ali-Mhrez/arbertv2-finetuned-noheadline512-afcsdc-stance-detection | Ali-Mhrez | 2025-06-14T08:13:00Z | 0 | 0 | transformers | ["transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-06-14T08:12:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
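Since this section is empty, below is a hedged, unofficial sketch based only on the repository tags (`transformers`, `bert`, `text-classification`); the expected input format and label meanings are assumptions, as the card does not document them.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Ali-Mhrez/arbertv2-finetuned-noheadline512-afcsdc-stance-detection"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Input text is a placeholder; the real input format is undocumented.
inputs = tokenizer("example article text", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])  # label names are undocumented
```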
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
abhishekhgalphade/layoutlmv3-finetuned-legal-fields | abhishekhgalphade | 2025-06-14T08:08:54Z | 0 | 0 | transformers | ["transformers", "safetensors", "layoutlmv3", "token-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2025-06-14T08:05:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
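Since this section is empty, below is a hedged, unofficial sketch based only on the repository tags (`layoutlmv3`, `token-classification`); the document path, processor availability, and label meanings are assumptions.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

model_id = "abhishekhgalphade/layoutlmv3-finetuned-legal-fields"
# Assumes the repo ships a processor config; LayoutLMv3's built-in OCR needs pytesseract.
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

image = Image.open("legal_document.png").convert("RGB")  # placeholder document scan
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # per-token label ids; meanings undocumented
```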
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MetaphoricalCode/Dans-DangerousWinds-V1.1.0-12b-exl3-4bpw-hb6 | MetaphoricalCode | 2025-06-14T08:06:51Z | 0 | 0 | null | ["safetensors", "mistral", "en", "dataset:PocketDoc/Dans-Prosemaxx-Adventure", "dataset:PocketDoc/Dans-Failuremaxx-Adventure", "dataset:PocketDoc/Dans-Prosemaxx-Cowriter-2-S", "base_model:PocketDoc/Dans-DangerousWinds-V1.1.0-12b", "base_model:quantized:PocketDoc/Dans-DangerousWinds-V1.1.0-12b", "license:apache-2.0", "4-bit", "exl3", "region:us"] | null | 2025-06-14T08:00:51Z |
---
license: apache-2.0
datasets:
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure
- PocketDoc/Dans-Prosemaxx-Cowriter-2-S
language:
- en
base_model:
- PocketDoc/Dans-DangerousWinds-V1.1.0-12b
base_model_relation: quantized
---
## Quantized using the default exllamav3 (0.0.3) quantization process.
- Original model: https://huggingface.co/PocketDoc/Dans-DangerousWinds-V1.1.0-12b
- exllamav3: https://github.com/turboderp-org/exllamav3
---
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
</head>
<div class="crt-container">
<div class="crt-case">
<div class="crt-inner-case">
<div class="crt-bezel">
<div class="terminal-screen">
<div style="text-align: center;">
<h2>Dans-DangerousWinds-V1.1.0-12b</h2>
<pre class="code-block" style="display: inline-block; text-align: left;"> (( "####@@!!$$ ))
`#####@@!$$` ))
(( '####@!!$:
(( ,####@!!$: ))
.###@!!$:
`##@@!$:
`#@!!$
!@# `#@!$: @#$
#$ `#@!$: !@!
'@!$:
'`\ "!$: /`'
'\ '!: /'
"\ : /"
-."-/\\\-."//.-"/:`\."-.JrS"."-=_\\
" -."-.\\"-."//.-".`-."_\\-.".-\".-//</pre>
</div>
<p>An old school text adventure model with a predilection for making the player fumble, often fatally.</p>
<h3>Key Details</h3>
<pre class="code-block">
BASE MODEL: mistralai/Mistral-Nemo-Base-2407
LICENSE: apache-2.0
LANGUAGE: English
CONTEXT LENGTH: 8192 tokens</pre>
<h3>Recommended Settings</h3>
<pre class="code-block">
TEMPERATURE: 1.0
TOP_P: 0.95
MIN_P: 0.05</pre>
<h3>Prompting Format</h3>
<p>The model uses old school "adventure" format:</p>
<pre class="code-block">
system prompt
\> user prompt
model output
\> user prompt
model output</pre>
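<p>A minimal sketch (in Python, with hypothetical strings) of assembling a turn history into this format before sending it to an inference backend:</p>
<pre class="code-block">
# Build an old-school "adventure"-format prompt from prior turns.
system = "An adventure set in a grim, low-fantasy world."  # system prompt
turns = [("look around", "The sky is the color of an old bruise...")]
prompt = system + "\n"
for action, narration in turns:
    prompt += "> " + action + "\n" + narration + "\n"
prompt += "> inventory\n"  # next player action; the model narrates from here</pre>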
<h3>Example Outputs</h3>
<details>
<summary>Sample Adventure #1</summary>
<p><em>Themes: CyberPunk, Sci-Fi</em></p>
<p><strong>></strong> You look around to get your bearings.</p>
<p>The sky is the color of an old bruise and it seems to be raining blood. The ground is slick with the stuff, but you can still make out what must have been a huge battle zone at one point. The remnants of what appear to have been military vehicles are scattered about in various states of disrepair - some have had their armor stripped away, others lie in mangled heaps or are simply empty shells with gaping holes where their weapons used to be.
In the distance, you spot the remains of a large building that may once have been a military base or compound. Its walls are riddled with bullet holes and chunks of masonry lie everywhere. There's no telling how long this place has been abandoned, but whatever happened here was definitely not pretty.</p>
<p><strong>></strong> inventory</p>
<p>You pat yourself down. In your pockets, you find:</p>
<p>- A small flashlight (batteries low)</p>
<p>- A multitool with several attachments (including a mini saw blade)</p>
<p>- A data chip containing encrypted files (labeled "Classified")</p>
<p>- Some ration packs (several days' worth)</p>
<p><strong>></strong> who am i?</p>
<p>Who are YOU? That's a great question! Unfortunately, you seem to have lost most of your memories during the battle. You don't even know why you're here. But you do recall one important detail...
Your name is [REDACTED]. You were part of an elite team sent on a mission deep behind enemy lines. What you were doing here, and whether you succeeded, is now lost to you. All you know for certain is that something went terribly wrong, and you are the only survivor.
Now it's up to you to piece together what happened, find a way out of this hellish landscape, and uncover the truth about your mission. Good luck, [REDACTED] - you're going to need it.</p>
<p><strong>></strong> well lets pick through the vehicles</p>
<p>You start picking your way through the wrecked vehicles, searching for anything useful. As you move past an armored personnel carrier with its door hanging open, you hear a faint whirring sound from inside. Before you can react, a metallic appendage swings out, grabbing your ankle. The appendage belongs to a cybernetically augmented combat drone that had been hidden in the shadows!
The drone's pincer grip is impossibly strong - you can't break free as it drags you inside the APC. Through the shattered viewports, you catch glimpses of the red-tinted world outside as the drone reels you in like a fish on a line. Its single optic sensor blinks menacingly at you as more mechanical limbs emerge from the vehicle's interior.
This thing is going to tear you apart, unless you can somehow disable it first...</p>
<p><strong>></strong> i try to shut it down using my access codes</p>
<p>As you attempt to input the shutdown sequence, the combat drone suddenly emits a high-pitched screech.
Its metal limbs coil tighter around you, squeezing with crushing force. The pressure causes your bones to splinter, pulverizing your limbs before the drone finally crushes your chest cavity, puncturing vital organs.
Your vision dims as you feel the warm rush of your own lifeblood draining out onto the cold metal floor of the APC. Your body goes limp, but the drone continues constricting until all that remains is a pulped, unrecognizable mass of meat and bone fragments.
With a grinding sound, the drone releases the mess, which falls in a heap next to the shredded remains of your equipment. The machine gives a satisfied chirp before returning to its dormant state, ready to claim another victim.</p>
</details>
<details>
<summary>Sample Adventure #2</summary>
<p><em>Themes: Gritty, Low fantasy, adventure</em></p>
<p><strong>></strong> You look around to get your bearings.</p>
<p>The ground is a dark gray with white speckles. A large river flows down the center of the valley, flowing from the north and disappearing to the south into a large cave entrance. The eastern side of the valley is rocky, and looks like it might be climbable. The western side has a steep slope, but you can see some trees growing there. The air is warm and humid.
You hear the sound of something large moving in the trees on the western side of the valley. You're not sure what it is, but you should probably avoid it.</p>
<p><strong>></strong> what am i doing here again?</p>
<p>You've been through this before. You are an adventurer who wandered too close to a wizard's lair, and was turned into a slime monster by one of his experiments gone wrong! Now you must find a way back to your normal form, or perish as a mindless creature of goop.</p>
<p><strong>></strong> dang thats rough</p>
<p>As a blob of sentient ooze, you have limited options for expressing frustration. You pulsate angrily, sloshing about in your gelatinous form. Some of your slime splatters onto a nearby rock, where it begins to corrode the surface. Slowly but surely, your acidic secretions etch "Dang, that's rough" into the stone - a lasting testament to your existential woes as a slime monster.</p>
<p><strong>></strong> ha</p>
<p>Your laugh comes out as an odd gurgling sound, bubbles forming at the surface of your gelatinous body. A small globule breaks free and sails through the air, splatting wetly against a rock. You watch in amusement as the impact causes the rock to partially dissolve under the acidic effects of your ooze.
"I guess being a slime does have its... benefits?" you think, chuckling inwardly at your own morbid humor. The little victories help keep your spirits up while trapped in this disgusting, shapeless form.</p>
</details>
<h3>Support Development</h3>
<p>Development is limited by funding and resources. To help support:</p>
<p>- Contact on HF</p>
<p>- Email: [email protected]</p>
<p class="coffee-container">
<a href="https://www.buymeacoffee.com/visually" target="_blank" rel="noopener noreferrer">
<img src="https://cdn.buymeacoffee.com/buttons/v2/default-yellow.png" alt="Buy Me A Coffee" height="45" width="162">
</a>
</p>
</div>
</div>
</div>
</div>
</div>
<style>
@import url('https://fonts.googleapis.com/css2?family=VT323&display=swap');
.crt-container {
padding: 10px;
max-width: 1000px;
margin: 0 auto;
width: 95%;
}
.crt-case {
background: #e8d7c3;
border-radius: 10px;
padding: 15px;
box-shadow: inset -2px -2px 5px rgba(0,0,0,0.3), 2px 2px 5px rgba(0,0,0,0.2);
}
.crt-inner-case {
background: #e8d7c3;
border-radius: 8px;
padding: 3px;
box-shadow: inset -1px -1px 4px rgba(0,0,0,0.3), 1px 1px 4px rgba(0,0,0,0.2);
}
.crt-bezel {
background: linear-gradient(145deg, #1a1a1a, #2a2a2a);
padding: 15px;
border-radius: 5px;
border: 3px solid #0a0a0a;
position: relative;
box-shadow:
inset 0 0 20px rgba(0,0,0,0.5),
inset 0 0 4px rgba(0,0,0,0.4),
inset 2px 2px 4px rgba(255,255,255,0.05),
inset -2px -2px 4px rgba(0,0,0,0.8),
0 0 2px rgba(0,0,0,0.6),
-1px -1px 4px rgba(255,255,255,0.1),
1px 1px 4px rgba(0,0,0,0.3);
}
.crt-bezel::before {
content: '';
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(45deg,
rgba(255,255,255,0.03) 0%,
rgba(255,255,255,0) 40%,
rgba(0,0,0,0.1) 60%,
rgba(0,0,0,0.2) 100%);
border-radius: 3px;
pointer-events: none;
}
.terminal-screen {
background: #111112;
padding: 20px;
border-radius: 15px;
position: relative;
overflow: hidden;
font-family: 'VT323', monospace;
font-size: clamp(12px, 1.5vw, 16px);
color: #e49b3e;
line-height: 1.4;
text-shadow: 0 0 2px #e49b3e;
animation: flicker 0.15s infinite;
filter: brightness(1.1) contrast(1.1);
box-shadow:
inset 0 0 30px rgba(0,0,0,0.9),
inset 0 0 8px rgba(0,0,0,0.8),
0 0 5px rgba(0,0,0,0.6);
max-width: 80ch;
margin: 0 auto;
}
.terminal-screen h2, .terminal-screen h3 {
font-size: clamp(16px, 2vw, 20px);
margin-bottom: 1em;
color: #e49b3e;
}
.terminal-screen pre.code-block {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
.terminal-screen::before {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: linear-gradient(rgba(18, 16, 16, 0) 50%, rgba(0, 0, 0, 0.25) 50%), url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADIAAAAyBAMAAADsEZWCAAAAGFBMVEUAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA4o8JoAAAAB3RSTlMAGwQIEQMYADcPzwAAACJJREFUKM9jYBgFo2AU0Beg+A8YMCLxGYZCbNQEo4BaAAD5TQiR5wU9vAAAAABJRU5ErkJggg==');
background-size: 100% 2.5px;
animation: scan 1s linear infinite;
pointer-events: none;
z-index: 2;
}
.terminal-screen::after {
content: "";
position: absolute;
top: 0;
left: 0;
right: 0;
bottom: 0;
background: radial-gradient(circle at center,
rgba(17, 17, 18, 0) 0%,
rgba(17, 17, 18, 0.2) 50%,
rgba(17, 17, 18, 0.15) 100%
);
border-radius: 20px;
animation: vignette-pulse 3s infinite;
pointer-events: none;
z-index: 1;
}
.terminal-screen details {
margin: 1em 0;
padding: 0.5em;
border: 1px solid #e49b3e;
border-radius: 4px;
}
.terminal-screen summary {
cursor: pointer;
font-weight: bold;
margin: -0.5em;
padding: 0.5em;
border-bottom: 1px solid #e49b3e;
color: #e49b3e;
}
.terminal-screen details[open] summary {
margin-bottom: 0.5em;
}
.badge-container, .coffee-container {
text-align: center;
margin: 1em 0;
}
.badge-container img, .coffee-container img {
max-width: 100%;
height: auto;
}
.terminal-screen a {
color: #e49b3e;
text-decoration: underline;
transition: opacity 0.2s;
}
.terminal-screen a:hover {
opacity: 0.8;
}
.terminal-screen strong, .terminal-screen em {
color: #f0f0f0; /* off-white color for user/system messages */
}
.terminal-screen p, .terminal-screen li {
color: #e49b3e;
}
.terminal-screen code,
.terminal-screen kbd,
.terminal-screen samp {
color: #e49b3e;
font-family: 'VT323', monospace;
text-shadow: 0 0 2px #e49b3e;
background-color: #1a1a1a;
padding: 0.2em 0.4em;
border-radius: 4px;
}
.terminal-screen pre.code-block,
.terminal-screen pre {
font-size: clamp(11px, 1.3vw, 14px);
white-space: pre-wrap;
margin: 1em 0;
background-color: #1a1a1a;
padding: 1em;
border-radius: 4px;
color: #e49b3e;
}
@keyframes flicker {
0% { opacity: 0.98; }
50% { opacity: 1; }
100% { opacity: 0.99; }
}
@keyframes scan {
0% { transform: translateY(0); }
100% { transform: translateY(4px); }
}
@keyframes vignette-pulse {
0% { opacity: 0.8; }
50% { opacity: 1; }
100% { opacity: 0.8; }
}
</style>
|
Felguk/Luntik-sdxl | Felguk | 2025-06-14T08:04:25Z | 0 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:stabilityai/stable-diffusion-3.5-large", "base_model:adapter:stabilityai/stable-diffusion-3.5-large", "license:apache-2.0", "region:us"] | text-to-image | 2025-06-14T08:04:22Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/1748870971335pinuz80t.png
base_model: stabilityai/stable-diffusion-3.5-large
instance_prompt: Luntik
license: apache-2.0
---
# Luntik-sdxl
<Gallery />
## Model description
A LoRA model for Luntik image generation.
## Trigger words
You should use `Luntik` to trigger the image generation.
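The card's metadata lists `diffusers`, a `lora` adapter, and `stabilityai/stable-diffusion-3.5-large` as the base model, so a hedged usage sketch might look like the following (the GPU, dtype, and weight layout are assumptions):

```python
import torch
from diffusers import StableDiffusion3Pipeline

# Load the base model listed in the card metadata, then attach this LoRA.
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("Felguk/Luntik-sdxl")

# Include the trigger word `Luntik` in the prompt.
image = pipe("Luntik smiling in a sunny meadow, cartoon style").images[0]
image.save("luntik.png")
```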
## Download model
[Download](/Felguk/Luntik-sdxl/tree/main) them in the Files & versions tab.
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_actions_seed_1_seed_25_20250614_075129 | gradientrouting-spar | 2025-06-14T08:00:40Z | 0 | 0 | transformers | ["transformers", "safetensors", "arxiv:1910.09700", "endpoints_compatible", "region:us"] | null | 2025-06-14T08:00:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
tkcho/dinov2-base | tkcho | 2025-06-14T08:00:07Z | 6 | 0 | null | ["pytorch", "safetensors", "dinov2", "dino", "vision", "arxiv:2304.07193", "license:apache-2.0", "region:us"] | null | 2025-06-14T07:59:32Z |
---
license: apache-2.0
tags:
- dino
- vision
inference: false
---
# Vision Transformer (base-sized model) trained using DINOv2
Vision Transformer (ViT) model trained using the DINOv2 method. It was introduced in the paper [DINOv2: Learning Robust Visual Features without Supervision](https://arxiv.org/abs/2304.07193) by Oquab et al. and first released in [this repository](https://github.com/facebookresearch/dinov2).
Disclaimer: The team releasing DINOv2 did not write a model card for this model, so this model card has been written by the Hugging Face team.
## Model description
The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a self-supervised fashion.
Images are presented to the model as a sequence of fixed-size patches, which are linearly embedded. A [CLS] token is added to the beginning of the sequence for use in classification tasks, and absolute position embeddings are added before the sequence is fed to the layers of the Transformer encoder.
Note that this model does not include any fine-tuned heads.
By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image.
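As an illustrative sketch of that linear-probing setup (not an official recipe; the class count and dummy input are placeholders), one could pool the [CLS] token from the frozen backbone and train a linear classifier on top:

```python
import torch
from transformers import AutoModel

backbone = AutoModel.from_pretrained("facebook/dinov2-base")
backbone.eval()  # keep the pretrained encoder frozen for linear probing

num_classes = 10  # placeholder; depends on your labeled dataset
classifier = torch.nn.Linear(backbone.config.hidden_size, num_classes)

pixel_values = torch.randn(1, 3, 224, 224)  # dummy batch; use the image processor in practice
with torch.no_grad():
    cls_embedding = backbone(pixel_values=pixel_values).last_hidden_state[:, 0]
logits = classifier(cls_embedding)  # train `classifier` with cross-entropy on your labels
```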
## Intended uses & limitations
You can use the raw model for feature extraction. See the [model hub](https://huggingface.co/models?search=facebook/dinov2) to look for
fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model:
```python
from transformers import AutoImageProcessor, AutoModel
from PIL import Image
import requests

# Load a sample image from the COCO validation set
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)

# Preprocess the image and run it through the pretrained backbone
processor = AutoImageProcessor.from_pretrained('facebook/dinov2-base')
model = AutoModel.from_pretrained('facebook/dinov2-base')
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state  # [CLS] token + patch tokens
```
### BibTeX entry and citation info
```bibtex
@misc{oquab2023dinov2,
title={DINOv2: Learning Robust Visual Features without Supervision},
author={Maxime Oquab and Timothée Darcet and Théo Moutakanni and Huy Vo and Marc Szafraniec and Vasil Khalidov and Pierre Fernandez and Daniel Haziza and Francisco Massa and Alaaeldin El-Nouby and Mahmoud Assran and Nicolas Ballas and Wojciech Galuba and Russell Howes and Po-Yao Huang and Shang-Wen Li and Ishan Misra and Michael Rabbat and Vasu Sharma and Gabriel Synnaeve and Hu Xu and Hervé Jegou and Julien Mairal and Patrick Labatut and Armand Joulin and Piotr Bojanowski},
year={2023},
eprint={2304.07193},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
freakyfractal/f1rs | freakyfractal | 2025-06-14T07:59:33Z | 5 | 0 | diffusers | ["diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "region:us"] | text-to-image | 2025-06-14T07:59:16Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/Coinye_2021.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# f1rs
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/freakyfractal/f1rs/tree/main) them in the Files & versions tab.
|
a2z-jankari-viral-videos/FULL.VIDEO.a2z.jankari.Viral.Video.Tutorial.Official | a2z-jankari-viral-videos | 2025-06-14T07:59:18Z | 0 | 0 | null | ["region:us"] | null | 2025-06-14T07:59:13Z |
<a href="https://sdu.sk/uLf"><img src="https://i.ibb.co.com/xMMVF88/686577567.gif" alt="fsd" /></a>
<a href="https://sdu.sk/uLf" rel="nofollow">►✅ 𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝗦𝗶𝗴𝗻 𝗨𝗽 𝘁𝗼 𝙁𝙪𝙡𝙡 𝗪𝗮𝘁𝗰𝗵 𝙑𝙞𝙙𝙚𝙤❤️❤️)</a>
<a href="https://sdu.sk/uLf" rel="nofollow">🔴 ➤►✅𝘾𝙇𝙄𝘾𝙆 𝙃𝙀𝙍𝙀 ==►► (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐥𝐢𝐧𝐤)</a>
|
Skywork/SkyReels-V2-I2V-14B-720P-Diffusers | Skywork | 2025-06-14T07:58:25Z | 0 | 0 | null | ["image-to-video", "arxiv:2504.13074", "arxiv:2407.01392", "license:other", "region:us"] | image-to-video | 2025-06-14T07:58:15Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: image-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, SkyReels V2 is the first open-source video generative model employing an **AutoRegressive Diffusion-Forcing architecture** to achieve **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of the [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) series models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1).
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate Infinite-Length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we demonstrate 2 running scripts as examples for long video generation. If you want to adjust the inference parameters, e.g., the duration of video, inference mode, read the Note below first.
synchronous generation for 10s video
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
Asynchronous generation of a 30-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, simply adjust `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, `--num_frames 1457` for 60s. These values are not strictly aligned with the logical frame count for a given duration, but they are aligned with training parameters, so they may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. For asynchronous inference, `--causal_block_size 5` is recommended; it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model in each iteration, e.g., the base frame latent count ((97-1)//4+1=25 for base_num_frames=97) and the last-iteration count ((237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17), MUST be divisible by causal_block_size; the sketch after these notes illustrates the arithmetic. If these values are hard to calculate and set properly, just use our recommended settings above :). Asynchronous inference takes more steps to diffuse the whole sequence, so it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the same generative length `--num_frames`. This may slightly reduce video quality; do not set it too small.
> - `--addnoise_condition` helps smooth long video generation by adding some noise to the clean condition. Too much noise can also cause inconsistency. 20 is a recommended value; you may try larger values, but it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
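The snippet below is an illustrative helper (not part of this repository) for the latent-frame arithmetic in the notes above, using the formulas exactly as stated there:

```python
def latent_frames(num_frames: int) -> int:
    # Temporal compression implied by the note: (n - 1) // 4 + 1
    return (num_frames - 1) // 4 + 1

base_num_frames, causal_block_size = 97, 5
base_latents = latent_frames(base_num_frames)  # (97 - 1) // 4 + 1 = 25
assert base_latents % causal_block_size == 0

# Last-iteration latent count for num_frames=237, overlap_history=17 (from the note).
num_frames, overlap_history = 237, 17
last_latents = (num_frames - base_num_frames - (base_num_frames - overlap_history)
                + overlap_history - 1) // 4 + 1  # = 20
assert last_latents % causal_block_size == 0
```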
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image via the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works well for short prompts; for long prompts, it may generate an excessively lengthy prompt that leads to over-saturation in the generated video. Note that peak GPU memory is 64GB+ when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate the enhanced prompt before enabling `--use_usp`.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expand the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Retention Steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image via the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for the I2V model.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares training data across quality tiers. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning result from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and the sub-expert captioners on a balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos to ensure conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare the performance with the SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models, and show a dramatic result in the shot related fields
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
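For reference, below is a minimal sketch of running SkyCaptioner-V1 for annotation. It assumes the model exposes the standard Qwen2.5-VL chat interface in `transformers` (with the `qwen-vl-utils` helper package); the instruction wording is illustrative, not the official annotation template.
```python
# Minimal sketch: caption one clip with SkyCaptioner-V1, assuming the standard
# Qwen2.5-VL interface. The instruction text below is illustrative only.
import torch
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "Skywork/SkyCaptioner-V1", torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained("Skywork/SkyCaptioner-V1")

messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "clip.mp4"},
        {"type": "text", "text": "Describe the shot type, shot angle, camera "
                                 "motion, main subjects and environment."},
    ],
}]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
_, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], videos=video_inputs, padding=True,
                   return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```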
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model through Reinforcement Learning. Specifically, we focus on motion quality because we find the main drawbacks of our generative model are:
- the generative model handles large, deformable motions poorly;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
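As a rough illustration of the pair-selection step described above, the sketch below gates candidate pairs by the reward gap. The `reward_model` interface and the margin threshold are assumptions for the sketch, not the released SkyReels-V2 pipeline.
```python
# Sketch: use a learned motion-quality reward model to select DPO preference
# pairs. Pairs share a prompt and comparable text alignment / visual quality,
# so the reward gap isolates motion quality. Names here are placeholders.
def select_dpo_pairs(candidate_pairs, reward_model, margin=0.5):
    selected = []
    for prompt, video_a, video_b in candidate_pairs:
        r_a = reward_model.score(video_a)      # scalar motion-quality score
        r_b = reward_model.score(video_b)
        if abs(r_a - r_b) >= margin:           # keep only confident preferences
            chosen, rejected = (video_a, video_b) if r_a > r_b else (video_b, video_a)
            selected.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return selected
```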
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
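To make the per-token noise idea concrete, here is a toy training step on dummy tensors. The real model operates on video latents with its own objective; none of the names below come from the released code.
```python
# Toy Diffusion Forcing step: each token gets an independent noise level in
# [0, 1] (0 = fully unmasked, 1 = fully masked), and the model is trained to
# recover the clean tokens from any mixture of noise levels.
import torch

def diffusion_forcing_step(model, clean_tokens):
    """clean_tokens: (batch, num_tokens, dim) latent video tokens."""
    b, n, _ = clean_tokens.shape
    t = torch.rand(b, n, 1)                        # per-token noise level
    noise = torch.randn_like(clean_tokens)
    noisy = (1 - t) * clean_tokens + t * noise     # simple linear corruption
    pred = model(noisy, t.squeeze(-1))             # condition on the noise levels
    return torch.mean((pred - clean_tokens) ** 2)  # denoising loss
```
Setting `t` to a single shared value per sequence recovers ordinary full-sequence diffusion, which is why a full-sequence model can be fine-tuned into the Diffusion Forcing Transformer.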
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model’s pretraining outcomes, which used only 24 fps video data, while strategically removing the FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and higher-quality concept-balanced datasets obtained through manual filtering. This final refinement phase focuses on increasing resolution so that overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct SkyReels-Bench for human assessment and leverage the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically leverages the benchmark’s longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, the semantic score is slightly lower than Wan2.1-14B's, even though we outperform Wan2.1-14B in human evaluations; we attribute this gap primarily to V-Bench’s insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
Skywork/SkyReels-V2-I2V-14B-540P-Diffusers
|
Skywork
| 2025-06-14T07:57:22Z | 0 | 0 | null |
[
"image-to-video",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"region:us"
] |
image-to-video
| 2025-06-14T07:56:44Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: image-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, this represents the first open-source video generative model employing an **AutoRegressive Diffusion-Forcing architecture** that achieves **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1).
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we demonstrate two running scripts as examples for long video generation. If you want to adjust the inference parameters, e.g., the video duration or the inference mode, read the Note below first.
Synchronous generation for a 10-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
Asynchronous generation for a 30-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes some description of the first-frame image.
> - For long video generation, you can simply adjust `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, or `--num_frames 1457` for 60s. These numbers are not strictly aligned with the logical frame count for the specified duration, but they are aligned with some training parameters, which means they may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. For asynchronous inference, `--causal_block_size 5` is recommended, while it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model in every iteration, e.g., the base latent count ((97-1)//4+1=25 for base_num_frames=97) and the last-iteration count ((237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17), MUST be divisible by causal_block_size. If you find it too hard to calculate and set proper values, just use our recommended settings above, or see the small checker script after this note. Asynchronous inference takes more denoising steps for the whole sequence, which means it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the same generation length `--num_frames`. This may slightly reduce video quality, and it should not be set too small.
> - `--addnoise_condition` helps smooth long video generation by adding some noise to the clean condition. Too much noise can also cause inconsistency; 20 is a recommended value, and while you may try larger values, it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
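If you would rather not do the arithmetic from the note above by hand, the helper below re-implements those formulas to check a candidate `--num_frames`. It is not part of this repository.
```python
# Check that every per-iteration latent count is divisible by causal_block_size,
# re-implementing the arithmetic quoted in the note above. Not part of the repo.
def check_num_frames(num_frames, base_num_frames=97, overlap_history=17,
                     causal_block_size=5):
    base_latents = (base_num_frames - 1) // 4 + 1        # 25 for 97 frames
    assert base_latents % causal_block_size == 0
    remaining = num_frames - base_num_frames
    step = base_num_frames - overlap_history             # fresh frames per iteration
    while remaining > 0:
        chunk = min(remaining, step)
        iter_latents = (chunk + overlap_history - 1) // 4 + 1
        assert iter_latents % causal_block_size == 0, (
            f"iteration latent count {iter_latents} is not divisible by "
            f"{causal_block_size}; adjust --num_frames")
        remaining -= chunk
    return True

check_num_frames(737)   # the recommended 30s asynchronous setting passes
```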
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works ideally for short prompts; for long prompts, it might generate an excessively lengthy result that could lead to over-saturation in the generated video. Note that peak GPU memory exceeds 64 GB when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate an enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expands the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Enables retention steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
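As a convenience, the task-specific recommendations above can be collected into a small lookup. Values are copied from this card; note that the table lists `--shift 5.0` for I2V while the note earlier recommends `3.0`, and the sketch follows the note. The helper itself is not part of the repository.
```python
# Collect the recommended per-task / per-resolution flags from this card into
# one place. Purely a convenience sketch, not part of the SkyReels-V2 repo.
RECOMMENDED = {
    "T2V": {"guidance_scale": 6.0, "shift": 8.0},
    "I2V": {"guidance_scale": 5.0, "shift": 3.0},   # per the I2V note above
}
NUM_FRAMES = {"540P": 97, "720P": 121}

def recommended_flags(task="T2V", resolution="540P"):
    cfg = RECOMMENDED[task]
    return (f"--resolution {resolution} --num_frames {NUM_FRAMES[resolution]} "
            f"--guidance_scale {cfg['guidance_scale']} --shift {cfg['shift']} --fps 24")

print(recommended_flags("I2V", "720P"))
# --resolution 720P --num_frames 121 --guidance_scale 5.0 --shift 3.0 --fps 24
```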
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning (SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLMs), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">SkyReels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares training data of varying quality. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning result from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and the sub-expert captioners on a balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos to ensure conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare the performance with the SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models, and show a dramatic result in the shot related fields
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model through Reinforcement Learning. Specifically, we focus on motion quality because we find the main drawbacks of our generative model are:
- the generative model handles large, deformable motions poorly;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
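For intuition, reward models over preference pairs are commonly trained with a Bradley-Terry-style pairwise objective; the sketch below shows that generic idea, not the actual SkyReels-V2 reward model.
```python
# Generic pairwise reward-model loss: push the preferred clip's score above the
# rejected one's. Illustrative only; the SkyReels-V2 reward model's architecture
# and objective are not released with this card.
import torch.nn.functional as F

def pairwise_reward_loss(reward_model, chosen_feats, rejected_feats):
    """chosen_feats / rejected_feats: (batch, dim) features of each pair."""
    r_chosen = reward_model(chosen_feats)        # (batch, 1) scalar scores
    r_rejected = reward_model(rejected_feats)
    # -log sigmoid(r_chosen - r_rejected) is minimized when chosen > rejected.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```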
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
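The remark that synchronous full-sequence diffusion is a special case can be made concrete in a few lines of toy code: a shared noise level is just a degenerate choice of per-token levels.
```python
# Toy illustration: independent per-token noise levels vs. the full-sequence
# special case where one shared level is broadcast to every token.
import torch

b, n = 2, 25                              # 2 sequences of 25 latent tokens
t_forcing = torch.rand(b, n)              # Diffusion Forcing: per-token levels
t_full = torch.rand(b, 1).expand(b, n)    # full-sequence diffusion: one level each
assert t_full.shape == t_forcing.shape    # identical interface, which is why a
                                          # full-sequence model can be fine-tuned
                                          # into a Diffusion Forcing Transformer
```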
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model’s pretraining outcomes, which used only 24 fps video data, while strategically removing the FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and higher-quality concept-balanced datasets obtained through manual filtering. This final refinement phase focuses on increasing resolution so that overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct SkyReels-Bench for human assessment and leverage the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically leverages the benchmark’s longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, the semantic score is slightly lower than Wan2.1-14B's, even though we outperform Wan2.1-14B in human evaluations; we attribute this gap primarily to V-Bench’s insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
pdgeorge/ppo-Huggy
|
pdgeorge
| 2025-06-14T07:53:32Z | 13 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] |
reinforcement-learning
| 2025-06-14T07:53:20Z |
---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: pdgeorge/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
Skywork/SkyReels-V2-T2V-14B-720P-Diffusers
|
Skywork
| 2025-06-14T07:53:02Z | 0 | 0 | null |
[
"text-to-video",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"region:us"
] |
text-to-video
| 2025-06-14T07:52:37Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: text-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, this represents the first open-source video generative model employing an **AutoRegressive Diffusion-Forcing architecture** that achieves **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1).
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we demonstrate two running scripts as examples for long video generation. If you want to adjust the inference parameters, e.g., the video duration or the inference mode, read the Note below first.
Synchronous generation for a 10-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
Asynchronous generation for a 30-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command; it is also better to use a **text-to-video (T2V)**-style prompt that includes some description of the first-frame image.
> - For long video generation, you can simply adjust `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, or `--num_frames 1457` for 60s. These numbers are not strictly aligned with the logical frame count for the specified duration, but they are aligned with some training parameters, which means they may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. For asynchronous inference, `--causal_block_size 5` is recommended, while it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model in every iteration, e.g., the base latent count ((97-1)//4+1=25 for base_num_frames=97) and the last-iteration count ((237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17), MUST be divisible by causal_block_size. If you find it too hard to calculate and set proper values, just use our recommended settings above, or see the worked check after this note. Asynchronous inference takes more denoising steps for the whole sequence, which means it is SLOWER than synchronous mode. In our experiments, asynchronous inference may improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the same generation length `--num_frames`. This may slightly reduce video quality, and it should not be set too small.
> - `--addnoise_condition` helps smooth long video generation by adding some noise to the clean condition. Too much noise can also cause inconsistency; 20 is a recommended value, and while you may try larger values, it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
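Plugging the recommended 30-second asynchronous settings into the divisibility rule from the note above makes a quick sanity check (pure arithmetic, not part of the repository):
```python
# Worked check of the 30s asynchronous setting (num_frames=737,
# base_num_frames=97, overlap_history=17, causal_block_size=5).
base_latents = (97 - 1) // 4 + 1            # 25 latents, divisible by 5
assert base_latents % 5 == 0
fresh_per_iter = 97 - 17                    # 80 new frames per later iteration
iters, remainder = divmod(737 - 97, fresh_per_iter)
assert remainder == 0                       # exactly 8 full iterations
iter_latents = (fresh_per_iter + 17 - 1) // 4 + 1   # 25 again, divisible by 5
assert iter_latents % 5 == 0
```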
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works ideally for short prompts; for long prompts, it might generate an excessively lengthy result that could lead to over-saturation in the generated video. Note that peak GPU memory exceeds 64 GB when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate an enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expand the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Enables retention steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
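Combining the recommended values above, a synchronous 15s Diffusion Forcing run might look like the following (a sketch assuming the same `generate_video_df.py` script and a downloaded DF checkpoint):
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
python3 generate_video_df.py \
  --model_id ${model_id} \
  --resolution 540P \
  --ar_step 0 \
  --base_num_frames 97 \
  --num_frames 377 \
  --overlap_history 17 \
  --addnoise_condition 20 \
  --prompt "A graceful white swan swimming in a serene lake at dawn." \
  --offload
```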
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning results from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and from sub-expert captioners, over balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos that ensures conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare performance with SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models and shows dramatically better results in the shot-related fields.
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model with Reinforcement Learning. Specifically, we focus on motion quality because we find the main drawbacks of our generative model are:
- the generative model does not handle large, deformable motions well.
- the generated videos may violate physical laws.
To avoid degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs with human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion-quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
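As a rough illustration of the selection step described above, the sketch below (ours; the scoring functions and thresholds are hypothetical placeholders, not the released implementation) shows how reward-guided preference pairs might be assembled for DPO:
```python
# Sketch of reward-guided preference-pair selection for DPO. The scoring
# callables stand in for learned models; none of these names come from the
# SkyReels-V2 codebase.
from typing import Callable, List, Tuple

def select_dpo_pairs(
    videos: List[str],                       # candidate videos for one prompt
    motion_reward: Callable[[str], float],   # learned motion-quality reward
    text_align: Callable[[str], float],
    visual_quality: Callable[[str], float],
    max_side_gap: float = 0.05,              # keep alignment/quality comparable
    min_motion_gap: float = 0.2,             # require a clear motion difference
) -> List[Tuple[str, str]]:
    pairs = []
    for i, a in enumerate(videos):
        for b in videos[i + 1:]:
            # Pair only samples whose text alignment and visual quality are
            # comparable, so DPO optimizes motion quality in isolation.
            if abs(text_align(a) - text_align(b)) > max_side_gap:
                continue
            if abs(visual_quality(a) - visual_quality(b)) > max_side_gap:
                continue
            ra, rb = motion_reward(a), motion_reward(b)
            if abs(ra - rb) >= min_motion_gap:
                chosen, rejected = (a, b) if ra > rb else (b, a)
                pairs.append((chosen, rejected))
    return pairs
```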
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
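The per-token noising idea can be sketched in a few lines. The snippet below is a toy illustration under a simplified linear schedule (assuming PyTorch), not the model's actual scheduler:
```python
import torch

def diffusion_forcing_noising(latents: torch.Tensor, num_train_timesteps: int = 1000):
    """latents: (batch, frames, channels, height, width) clean video latents."""
    b, f = latents.shape[:2]
    # Independent timestep per frame token -- the core of Diffusion Forcing.
    t = torch.randint(0, num_train_timesteps, (b, f), device=latents.device)
    # Toy linear schedule; the real model uses its trained noise scheduler.
    alpha = (1.0 - t.float() / num_train_timesteps).view(b, f, 1, 1, 1)
    noise = torch.randn_like(latents)
    noisy = alpha.sqrt() * latents + (1.0 - alpha).sqrt() * noise
    return noisy, t  # t == 0 leaves a token clean, i.e. fully "unmasked"

# Synchronous full-sequence diffusion is the special case of one shared level:
# t = torch.full((b, f), shared_t) noises every token identically.
```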
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model’s pretraining outcomes, which used only fps24 video data, while strategically removing FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and manually filtered, higher-quality concept-balanced datasets. This final refinement phase focuses on increasing resolution so that overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct the SkyReels-Bench for human assessment and leveraged the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with the state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically uses the benchmark’s longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, our semantic score is slightly lower than Wan2.1-14B’s, while we outperform Wan2.1-14B in human evaluations; we attribute this gap primarily to V-Bench’s insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
thanhtantran/Qwen2-VL-2B-RKLLM
|
thanhtantran
| 2025-06-14T07:51:32Z | 0 | 0 | null |
[
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T07:11:47Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen2-VL-2B-Instruct
---
Use with
https://github.com/thanhtantran/Qwen2-VL-Streamlit-App-with-RKLLM
|
Skywork/SkyReels-V2-T2V-14B-540P-Diffusers
|
Skywork
| 2025-06-14T07:50:14Z | 0 | 0 | null |
[
"text-to-video",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"region:us"
] |
text-to-video
| 2025-06-14T07:50:01Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: text-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, this is the first open-source video generative model employing an **autoregressive Diffusion Forcing architecture** that achieves **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1).
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
After downloading, set the model path in your generation commands:
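For example, a checkpoint can be fetched with the Hugging Face CLI and its local path used as `model_id` (a sketch; substitute whichever variant from the table you need):
```shell
# requires: pip install "huggingface_hub[cli]"
huggingface-cli download Skywork/SkyReels-V2-DF-14B-540P --local-dir ./SkyReels-V2-DF-14B-540P
# then point the generation scripts at the local path (or use the repo id directly)
model_id=./SkyReels-V2-DF-14B-540P
```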
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version of the model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Here we demonstrate two example scripts for long video generation. If you want to adjust the inference parameters, e.g., video duration or inference mode, read the Note below first.
Synchronous generation of a 10s video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
Asynchronous generation of a 30s video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command. It is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, simply adjust `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, `--num_frames 1457` for 60s. These values are not strictly aligned with the logical frame count for the specified duration, but they are aligned with certain training parameters, which means they may perform better. When you use asynchronous inference with causal_block_size > 1, `--num_frames` should be set carefully.
> - You can use `--ar_step 5` to enable asynchronous inference. For asynchronous inference, `--causal_block_size 5` is recommended; it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model in every iteration, e.g., the base frame latent count ((97-1)//4+1=25 for base_num_frames=97) and the last-iteration count ((237-97-(97-17)x1+17-1)//4+1=20 for base_num_frames=97, num_frames=237, overlap_history=17), MUST be divisible by causal_block_size. If you find these values hard to calculate, just use our recommended settings above, or verify with the sketch after these notes. Asynchronous inference takes more steps to diffuse the whole sequence, which means it is SLOWER than synchronous mode; in our experiments, it may improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the same overall generation length `--num_frames`. This may slightly reduce video quality, and it should not be set too small.
> - `--addnoise_condition` helps smooth long video generation by adding some noise to the clean conditioning frames. Too much noise can cause inconsistency as well. 20 is the recommended value; you may try larger values, but it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
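The divisibility requirement in the notes above can also be checked programmatically. Below is a minimal sketch (ours, not part of the official repository; the function names are illustrative) that reproduces the latent-count arithmetic from the note:
```python
# Sketch: verify that every iteration's latent frame count is divisible by
# --causal_block_size, mirroring the arithmetic in the note above.

def latent_frames(frames: int) -> int:
    # The VAE maps 4 video frames to 1 latent frame (plus the first frame),
    # hence the (frames - 1) // 4 + 1 formula used in the note.
    return (frames - 1) // 4 + 1

def check_async_config(num_frames: int, base_num_frames: int,
                       overlap_history: int, causal_block_size: int) -> bool:
    step = base_num_frames - overlap_history   # new frames per extra iteration
    chunks = [latent_frames(base_num_frames)]  # first iteration
    remaining = num_frames - base_num_frames
    while remaining > 0:
        cur = min(remaining, step)
        # later iterations also condition on overlap_history overlapping frames
        chunks.append(latent_frames(cur + overlap_history))
        remaining -= cur
    ok = all(c % causal_block_size == 0 for c in chunks)
    print(f"latent counts per iteration: {chunks} -> {'OK' if ok else 'NOT divisible'}")
    return ok

# Reproduces the example above: 25 latents for the base window and 20 for the
# last iteration with base_num_frames=97, num_frames=237, overlap_history=17.
check_async_config(237, 97, 17, causal_block_size=5)
```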
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
- **Prompt Enhancer**
The prompt enhancer is implemented based on <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works best with short prompts; for long prompts, it may generate an excessively lengthy prompt that can lead to over-saturation in the generated video. Note that peak GPU memory exceeds 64GB when `--prompt_enhancer` is used. If you want to obtain the enhanced prompt separately, you can also run the prompt_enhancer script on its own for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be used together with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate an enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expand the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Enables retention steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
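Combining the recommended values above, a synchronous 15s Diffusion Forcing run might look like the following (a sketch assuming the same `generate_video_df.py` script and a downloaded DF checkpoint):
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
python3 generate_video_df.py \
  --model_id ${model_id} \
  --resolution 540P \
  --ar_step 0 \
  --base_num_frames 97 \
  --num_frames 377 \
  --overlap_history 17 \
  --addnoise_condition 20 \
  --prompt "A graceful white swan swimming in a serene lake at dawn." \
  --offload
```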
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image using the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLM), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">Skyreels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. This model is trained on the captioning results from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and from sub-expert captioners, over balanced video data. The balanced video data is a carefully curated dataset of approximately 2 million videos that ensures conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance in domain-specific video captioning tasks. To compare performance with SOTA models, we conducted a manual assessment of accuracy across different captioning fields using a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models and shows dramatically better results in the shot-related fields.
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model with Reinforcement Learning. Specifically, we focus on motion quality because we find the main drawbacks of our generative model are:
- the generative model does not handle large, deformable motions well.
- the generated videos may violate physical laws.
To avoid degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs with human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion-quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
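As a rough illustration of the selection step described above, the sketch below (ours; the scoring functions and thresholds are hypothetical placeholders, not the released implementation) shows how reward-guided preference pairs might be assembled for DPO:
```python
# Sketch of reward-guided preference-pair selection for DPO. The scoring
# callables stand in for learned models; none of these names come from the
# SkyReels-V2 codebase.
from typing import Callable, List, Tuple

def select_dpo_pairs(
    videos: List[str],                       # candidate videos for one prompt
    motion_reward: Callable[[str], float],   # learned motion-quality reward
    text_align: Callable[[str], float],
    visual_quality: Callable[[str], float],
    max_side_gap: float = 0.05,              # keep alignment/quality comparable
    min_motion_gap: float = 0.2,             # require a clear motion difference
) -> List[Tuple[str, str]]:
    pairs = []
    for i, a in enumerate(videos):
        for b in videos[i + 1:]:
            # Pair only samples whose text alignment and visual quality are
            # comparable, so DPO optimizes motion quality in isolation.
            if abs(text_align(a) - text_align(b)) > max_side_gap:
                continue
            if abs(visual_quality(a) - visual_quality(b)) > max_side_gap:
                continue
            ra, rb = motion_reward(a), motion_reward(b)
            if abs(ra - rb) >= min_motion_gap:
                chosen, rejected = (a, b) if ra > rb else (b, a)
                pairs.append((chosen, rejected))
    return pairs
```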
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
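The per-token noising idea can be sketched in a few lines. The snippet below is a toy illustration under a simplified linear schedule (assuming PyTorch), not the model's actual scheduler:
```python
import torch

def diffusion_forcing_noising(latents: torch.Tensor, num_train_timesteps: int = 1000):
    """latents: (batch, frames, channels, height, width) clean video latents."""
    b, f = latents.shape[:2]
    # Independent timestep per frame token -- the core of Diffusion Forcing.
    t = torch.randint(0, num_train_timesteps, (b, f), device=latents.device)
    # Toy linear schedule; the real model uses its trained noise scheduler.
    alpha = (1.0 - t.float() / num_train_timesteps).view(b, f, 1, 1, 1)
    noise = torch.randn_like(latents)
    noisy = alpha.sqrt() * latents + (1.0 - alpha).sqrt() * noise
    return noisy, t  # t == 0 leaves a token clean, i.e. fully "unmasked"

# Synchronous full-sequence diffusion is the special case of one shared level:
# t = torch.full((b, f), shared_t) noises every token identically.
```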
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model’s pretraining outcomes, which used only fps24 video data, while strategically removing FPS embedding components to streamline the architecture. Trained with high-quality concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training processes. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and manually filtered, higher-quality concept-balanced datasets. This final refinement phase focuses on increasing resolution so that overall video quality is further enhanced.
## Performance
To comprehensively evaluate our proposed method, we construct the SkyReels-Bench for human assessment and leveraged the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with the state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive performance in **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations using the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically uses the benchmark’s longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. In this evaluation, our semantic score is slightly lower than Wan2.1-14B’s, while we outperform Wan2.1-14B in human evaluations; we attribute this gap primarily to V-Bench’s insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
Skywork/SkyReels-V2-DF-14B-540P-Diffusers
|
Skywork
| 2025-06-14T07:46:28Z | 0 | 1 | null |
[
"text-to-video",
"arxiv:2504.13074",
"arxiv:2407.01392",
"license:other",
"region:us"
] |
text-to-video
| 2025-06-14T07:45:59Z |
---
license: other
license_name: skywork-license
license_link: LICENSE
pipeline_tag: text-to-video
---
<p align="center">
<img src="assets/logo2.png" alt="SkyReels Logo" width="50%">
</p>
<h1 align="center">SkyReels V2: Infinite-Length Film Generative Model</h1>
<p align="center">
📑 <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a> · 👋 <a href="https://www.skyreels.ai/home?utm_campaign=huggingface_skyreels_v2" target="_blank">Playground</a> · 💬 <a href="https://discord.gg/PwM6NYtccQ" target="_blank">Discord</a> · 🤗 <a href="https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9" target="_blank">Hugging Face</a> · 🤖 <a href="https://www.modelscope.cn/collections/SkyReels-V2-f665650130b144" target="_blank">ModelScope</a> · 🌐 <a href="https://github.com/SkyworkAI/SkyReels-V2" target="_blank">GitHub</a>
</p>
---
Welcome to the **SkyReels V2** repository! Here, you'll find the model weights for our infinite-length film generative models. To the best of our knowledge, this is the first open-source video generative model employing an **autoregressive Diffusion Forcing architecture** that achieves **SOTA performance** among publicly available models.
## 🔥🔥🔥 News!!
* Apr 24, 2025: 🔥 We release the 720P models, [SkyReels-V2-DF-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P) and [SkyReels-V2-I2V-14B-720P](https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P). The former facilitates infinite-length autoregressive video generation, and the latter focuses on Image2Video synthesis.
* Apr 21, 2025: 👋 We release the inference code and model weights of [SkyReels-V2](https://huggingface.co/collections/Skywork/skyreels-v2-6801b1b93df627d441d0d0d9) Series Models and the video captioning model [SkyCaptioner-V1](https://huggingface.co/Skywork/SkyCaptioner-V1).
* Apr 3, 2025: 🔥 We also release [SkyReels-A2](https://github.com/SkyworkAI/SkyReels-A2). This is an open-sourced controllable video generation framework capable of assembling arbitrary visual elements.
* Feb 18, 2025: 🔥 We released [SkyReels-A1](https://github.com/SkyworkAI/SkyReels-A1). This is an open-sourced and effective framework for portrait image animation.
* Feb 18, 2025: 🔥 We released [SkyReels-V1](https://github.com/SkyworkAI/SkyReels-V1). This is the first and most advanced open-source human-centric video foundation model.
## 🎥 Demos
<table>
<tr>
<td align="center">
<video src="https://github.com/user-attachments/assets/f6f9f9a7-5d5f-433c-9d73-d8d593b7ad25" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/0eb13415-f4d9-4aaf-bcd3-3031851109b9" width="100%"></video>
</td>
<td align="center">
<video src="https://github.com/user-attachments/assets/dcd16603-5bf4-4786-8e4d-1ed23889d07a" width="100%"></video>
</td>
</tr>
</table>
The demos above showcase 30-second videos generated using our SkyReels-V2 Diffusion Forcing model.
## 📑 TODO List
- [x] <a href="https://arxiv.org/pdf/2504.13074">Technical Report</a>
- [x] Checkpoints of the 14B and 1.3B Models Series
- [x] Single-GPU & Multi-GPU Inference Code
- [x] <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a>: A Video Captioning Model
- [x] Prompt Enhancer
- [ ] Diffusers integration
- [ ] Checkpoints of the 5B Models Series
- [ ] Checkpoints of the Camera Director Models
- [ ] Checkpoints of the Step & Guidance Distill Model
## 🚀 Quickstart
#### Installation
```shell
# clone the repository.
git clone https://github.com/SkyworkAI/SkyReels-V2
cd SkyReels-V2
# Install dependencies. Test environment uses Python 3.10.12.
pip install -r requirements.txt
```
#### Model Download
You can download our models from Hugging Face:
<table>
<thead>
<tr>
<th>Type</th>
<th>Model Variant</th>
<th>Recommended Height/Width/Frame</th>
<th>Link</th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="5">Diffusion Forcing</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-DF-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-DF-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Text-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-T2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-T2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="5">Image-to-Video</td>
<td>1.3B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-1.3B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-1.3B-540P">ModelScope</a></td>
</tr>
<tr>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-540P</td>
<td>544 * 960 * 97f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-540P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-540P">ModelScope</a></td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>🤗 <a href="https://huggingface.co/Skywork/SkyReels-V2-I2V-14B-720P">Huggingface</a> 🤖 <a href="https://www.modelscope.cn/models/Skywork/SkyReels-V2-I2V-14B-720P">ModelScope</a></td>
</tr>
<tr>
<td rowspan="3">Camera Director</td>
<td>5B-540P</td>
<td>544 * 960 * 97f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>5B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
<tr>
<td>14B-720P</td>
<td>720 * 1280 * 121f</td>
<td>Coming Soon</td>
</tr>
</tbody>
</table>
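For example, a checkpoint can be fetched with the `huggingface-cli` tool from `huggingface_hub` (a minimal sketch; the local directory name is arbitrary, and the generation scripts below also accept the repo id directly):
```shell
# Install the Hugging Face CLI and download one checkpoint locally.
pip install "huggingface_hub[cli]"
huggingface-cli download Skywork/SkyReels-V2-DF-14B-540P \
  --local-dir ./SkyReels-V2-DF-14B-540P
# Point --model_id at either the repo id or this local path.
```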
After downloading, set the model path in your generation commands:
#### Single GPU Inference
- **Diffusion Forcing for Long Video Generation**
The <a href="https://arxiv.org/abs/2407.01392">**Diffusion Forcing**</a> version of the model allows us to generate infinite-length videos. This model supports both **text-to-video (T2V)** and **image-to-video (I2V)** tasks, and it can perform inference in both synchronous and asynchronous modes. Below are two example scripts for long video generation. If you want to adjust inference parameters such as the video duration or inference mode, read the Note below first.
Synchronous generation of a 10-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# synchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
Asynchronous generation of a 30-second video:
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# asynchronous inference
python3 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 5 \
--causal_block_size 5 \
--base_num_frames 97 \
--num_frames 737 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--offload
```
> **Note**:
> - If you want to run the **image-to-video (I2V)** task, add `--image ${image_path}` to your command. It is also better to use a **text-to-video (T2V)**-style prompt that includes a description of the first-frame image.
> - For long video generation, simply change `--num_frames`, e.g., `--num_frames 257` for a 10s video, `--num_frames 377` for 15s, `--num_frames 737` for 30s, or `--num_frames 1457` for 60s. These values are not strictly aligned with the logical frame count for a given duration, but they are aligned with certain training parameters, which tends to yield better results. When using asynchronous inference with `causal_block_size > 1`, `--num_frames` must be set carefully.
> - Use `--ar_step 5` to enable asynchronous inference; in that case `--causal_block_size 5` is recommended, whereas it should not be set for synchronous generation. REMEMBER that the number of frame latents fed into the model at every iteration MUST be divisible by `causal_block_size`: e.g., the base latent count is (97-1)//4+1 = 25 for base_num_frames=97, and the last-iteration count is (237-97-(97-17)x1+17-1)//4+1 = 20 for base_num_frames=97, num_frames=237, overlap_history=17 (see the small sketch after these notes). If these values are hard to compute, simply use the recommended settings above :). Asynchronous inference takes more steps to diffuse the whole sequence and is therefore SLOWER than synchronous mode, but in our experiments it can improve instruction following and visual consistency.
> - To reduce peak VRAM, lower `--base_num_frames`, e.g., to 77 or 57, while keeping the target length `--num_frames` unchanged. This may slightly reduce video quality, and `--base_num_frames` should not be set too small.
> - `--addnoise_condition` helps smooth long video generation by adding some noise to the clean conditioning frames. Too much noise can itself cause inconsistency; 20 is a recommended value, and while you may try larger values, it is best not to exceed 50.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 51.2GB peak VRAM.
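To make the divisibility rule concrete, here is a small shell sketch (a hypothetical helper, not part of the repository) that reproduces the latent-frame arithmetic from the notes above:
```shell
# Hypothetical check: the latent frame count fed into the model each iteration
# must be divisible by causal_block_size when using asynchronous inference.
base_num_frames=97; causal_block_size=5
base_latent=$(( (base_num_frames - 1) / 4 + 1 ))   # 25 for base_num_frames=97
if (( base_latent % causal_block_size == 0 )); then
  echo "base latent count ${base_latent} divisible by ${causal_block_size}: OK"
else
  echo "adjust base_num_frames or causal_block_size"
fi
# The last-iteration latent count (computed from num_frames and
# overlap_history as in the note above) must satisfy the same rule.
```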
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
python3 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--offload \
--teacache \
--use_ret_steps \
--teacache_thresh 0.3
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image via the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models; see the sketch after these notes.
> - Generating a 540P video using the 1.3B model requires approximately 14.7GB peak VRAM, while the same resolution video using the 14B model demands around 43.4GB peak VRAM.
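As a concrete I2V counterpart to the T2V command above (a sketch: `./input.jpg` is a placeholder path, and the remaining parameters follow the recommendations in the notes):
```shell
# run Image-to-Video Generation (sketch; the input image path is a placeholder)
model_id=Skywork/SkyReels-V2-I2V-14B-540P
python3 generate_video.py \
  --model_id ${model_id} \
  --resolution 540P \
  --num_frames 97 \
  --image ./input.jpg \
  --guidance_scale 5.0 \
  --shift 3.0 \
  --fps 24 \
  --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
  --offload
```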
- **Prompt Enhancer**
The prompt enhancer is implemented on top of <a href="https://huggingface.co/Qwen/Qwen2.5-32B-Instruct">Qwen2.5-32B-Instruct</a> and is enabled via the `--prompt_enhancer` parameter. It works well for short prompts; for long prompts it may produce an excessively lengthy result that leads to over-saturation in the generated video. Note that peak GPU memory exceeds 64 GB when `--prompt_enhancer` is used. To obtain the enhanced prompt on its own, you can also run the prompt enhancer script separately for testing. The steps are as follows:
```shell
cd skyreels_v2_infer/pipelines
python3 prompt_enhancer.py --prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface."
```
> **Note**:
> - `--prompt_enhancer` cannot be combined with `--use_usp`. We recommend running the skyreels_v2_infer/pipelines/prompt_enhancer.py script first to generate the enhanced prompt before enabling the `--use_usp` parameter.
**Advanced Configuration Options**
Below are the key parameters you can customize for video generation:
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --prompt | | Text description for generating your video |
| --image | | Path to input image for image-to-video generation |
| --resolution | 540P or 720P | Output video resolution (select based on model type) |
| --num_frames | 97 or 121 | Total frames to generate (**97 for 540P models**, **121 for 720P models**) |
| --inference_steps | 50 | Number of denoising steps |
| --fps | 24 | Frames per second in the output video |
| --shift | 8.0 or 5.0 | Flow matching scheduler parameter (**8.0 for T2V**, **5.0 for I2V**) |
| --guidance_scale | 6.0 or 5.0 | Controls text adherence strength (**6.0 for T2V**, **5.0 for I2V**) |
| --seed | | Fixed seed for reproducible results (omit for random generation) |
| --offload | True | Offloads model components to CPU to reduce VRAM usage (recommended) |
| --use_usp | True | Enables multi-GPU acceleration with xDiT USP |
| --outdir | ./video_out | Directory where generated videos will be saved |
| --prompt_enhancer | True | Expands the prompt into a more detailed description |
| --teacache | False | Enables teacache for faster inference |
| --teacache_thresh | 0.2 | Higher values give more speedup at the cost of quality |
| --use_ret_steps | False | Enables retention steps for teacache |
**Diffusion Forcing Additional Parameters**
| Parameter | Recommended Value | Description |
|:----------------------:|:---------:|:-----------------------------------------:|
| --ar_step | 0 | Controls asynchronous inference (0 for synchronous mode) |
| --base_num_frames | 97 or 121 | Base frame count (**97 for 540P**, **121 for 720P**) |
| --overlap_history | 17 | Number of frames to overlap for smooth transitions in long videos |
| --addnoise_condition | 20 | Improves consistency in long video generation |
| --causal_block_size | 5 | Recommended when using asynchronous inference (--ar_step > 0) |
#### Multi-GPU inference using xDiT USP
We use [xDiT](https://github.com/xdit-project/xDiT) USP to accelerate inference. For example, to generate a video with 2 GPUs, you can use the following command:
- **Diffusion Forcing**
```shell
model_id=Skywork/SkyReels-V2-DF-14B-540P
# diffusion forcing synchronous inference
torchrun --nproc_per_node=2 generate_video_df.py \
--model_id ${model_id} \
--resolution 540P \
--ar_step 0 \
--base_num_frames 97 \
--num_frames 257 \
--overlap_history 17 \
--prompt "A graceful white swan with a curved neck and delicate feathers swimming in a serene lake at dawn, its reflection perfectly mirrored in the still water as mist rises from the surface, with the swan occasionally dipping its head into the water to feed." \
--addnoise_condition 20 \
--use_usp \
--offload \
--seed 42
```
- **Text To Video & Image To Video**
```shell
# run Text-to-Video Generation
model_id=Skywork/SkyReels-V2-T2V-14B-540P
torchrun --nproc_per_node=2 generate_video.py \
--model_id ${model_id} \
--resolution 540P \
--num_frames 97 \
--guidance_scale 6.0 \
--shift 8.0 \
--fps 24 \
--offload \
--prompt "A serene lake surrounded by towering mountains, with a few swans gracefully gliding across the water and sunlight dancing on the surface." \
--use_usp \
--seed 42
```
> **Note**:
> - When using an **image-to-video (I2V)** model, you must provide an input image via the `--image ${image_path}` parameter. `--guidance_scale 5.0` and `--shift 3.0` are recommended for I2V models.
## Contents
- [Abstract](#abstract)
- [Methodology of SkyReels-V2](#methodology-of-skyreels-v2)
- [Key Contributions of SkyReels-V2](#key-contributions-of-skyreels-v2)
- [Video Captioner](#video-captioner)
- [Reinforcement Learning](#reinforcement-learning)
- [Diffusion Forcing](#diffusion-forcing)
- [High-Quality Supervised Fine-Tuning(SFT)](#high-quality-supervised-fine-tuning-sft)
- [Performance](#performance)
- [Acknowledgements](#acknowledgements)
- [Citation](#citation)
---
## Abstract
Recent advances in video generation have been driven by diffusion models and autoregressive frameworks, yet critical challenges persist in harmonizing prompt adherence, visual quality, motion dynamics, and duration: compromises in motion dynamics to enhance temporal visual quality, constrained video duration (5-10 seconds) to prioritize resolution, and inadequate shot-aware generation stemming from general-purpose MLLMs' inability to interpret cinematic grammar, such as shot composition, actor expressions, and camera motions. These intertwined limitations hinder realistic long-form synthesis and professional film-style generation.
To address these limitations, we introduce SkyReels-V2, the world's first infinite-length film generative model using a Diffusion Forcing framework. Our approach synergizes Multi-modal Large Language Models (MLLMs), Multi-stage Pretraining, Reinforcement Learning, and Diffusion Forcing techniques to achieve comprehensive optimization. Beyond its technical innovations, SkyReels-V2 enables multiple practical applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and multi-subject consistent video generation through our <a href="https://github.com/SkyworkAI/SkyReels-A2">SkyReels-A2</a> system.
## Methodology of SkyReels-V2
The SkyReels-V2 methodology consists of several interconnected components. It starts with a comprehensive data processing pipeline that prepares various quality training data. At its core is the Video Captioner architecture, which provides detailed annotations for video content. The system employs a multi-task pretraining strategy to build fundamental video generation capabilities. Post-training optimization includes Reinforcement Learning to enhance motion quality, Diffusion Forcing Training for generating extended videos, and High-quality Supervised Fine-Tuning (SFT) stages for visual refinement. The model runs on optimized computational infrastructure for efficient training and inference. SkyReels-V2 supports multiple applications, including Story Generation, Image-to-Video Synthesis, Camera Director functionality, and Elements-to-Video Generation.
<p align="center">
<img src="assets/main_pipeline.jpg" alt="mainpipeline" width="100%">
</p>
## Key Contributions of SkyReels-V2
#### Video Captioner
<a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> serves as our video captioning model for data annotation. It is trained on captioning results from the base model <a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Instruct</a> and from sub-expert captioners over balanced video data: a carefully curated set of approximately 2 million videos selected for conceptual balance and annotation quality. Built upon the <a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Instruct</a> foundation model, <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> is fine-tuned to enhance performance on domain-specific video captioning tasks. To compare against SOTA models, we conducted a manual assessment of accuracy across different captioning fields on a test set of 1,000 samples. The proposed <a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a> achieves the highest average accuracy among the baseline models and shows dramatic improvements in the shot-related fields.
<p align="center">
<table align="center">
<thead>
<tr>
<th>model</th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-7B-Instruct">Qwen2.5-VL-7B-Ins.</a></th>
<th><a href="https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct">Qwen2.5-VL-72B-Ins.</a></th>
<th><a href="https://huggingface.co/omni-research/Tarsier2-Recap-7b">Tarsier2-Recap-7b</a></th>
<th><a href="https://huggingface.co/Skywork/SkyCaptioner-V1">SkyCaptioner-V1</a></th>
</tr>
</thead>
<tbody>
<tr>
<td>Avg accuracy</td>
<td>51.4%</td>
<td>58.7%</td>
<td>49.4%</td>
<td><strong>76.3%</strong></td>
</tr>
<tr>
<td>shot type</td>
<td>76.8%</td>
<td>82.5%</td>
<td>60.2%</td>
<td><strong>93.7%</strong></td>
</tr>
<tr>
<td>shot angle</td>
<td>60.0%</td>
<td>73.7%</td>
<td>52.4%</td>
<td><strong>89.8%</strong></td>
</tr>
<tr>
<td>shot position</td>
<td>28.4%</td>
<td>32.7%</td>
<td>23.6%</td>
<td><strong>83.1%</strong></td>
</tr>
<tr>
<td>camera motion</td>
<td>62.0%</td>
<td>61.2%</td>
<td>45.3%</td>
<td><strong>85.3%</strong></td>
</tr>
<tr>
<td>expression</td>
<td>43.6%</td>
<td>51.5%</td>
<td>54.3%</td>
<td><strong>68.8%</strong></td>
</tr>
<tr>
<td colspan="5" style="text-align: center; border-bottom: 1px solid #ddd; padding: 8px;"></td>
</tr>
<tr>
<td>TYPES_type</td>
<td>43.5%</td>
<td>49.7%</td>
<td>47.6%</td>
<td><strong>82.5%</strong></td>
</tr>
<tr>
<td>TYPES_sub_type</td>
<td>38.9%</td>
<td>44.9%</td>
<td>45.9%</td>
<td><strong>75.4%</strong></td>
</tr>
<tr>
<td>appearance</td>
<td>40.9%</td>
<td>52.0%</td>
<td>45.6%</td>
<td><strong>59.3%</strong></td>
</tr>
<tr>
<td>action</td>
<td>32.4%</td>
<td>52.0%</td>
<td><strong>69.8%</strong></td>
<td>68.8%</td>
</tr>
<tr>
<td>position</td>
<td>35.4%</td>
<td>48.6%</td>
<td>45.5%</td>
<td><strong>57.5%</strong></td>
</tr>
<tr>
<td>is_main_subject</td>
<td>58.5%</td>
<td>68.7%</td>
<td>69.7%</td>
<td><strong>80.9%</strong></td>
</tr>
<tr>
<td>environment</td>
<td>70.4%</td>
<td><strong>72.7%</strong></td>
<td>61.4%</td>
<td>70.5%</td>
</tr>
<tr>
<td>lighting</td>
<td>77.1%</td>
<td><strong>80.0%</strong></td>
<td>21.2%</td>
<td>76.5%</td>
</tr>
</tbody>
</table>
</p>
#### Reinforcement Learning
Inspired by previous successes with LLMs, we propose to enhance the performance of the generative model through Reinforcement Learning. Specifically, we focus on motion quality because we find the main drawbacks of our generative model to be:
- it does not handle large, deformable motions well;
- the generated videos may violate physical laws.
To avoid the degradation in other metrics, such as text alignment and video quality, we ensure the preference data pairs have comparable text alignment and video quality, while only the motion quality varies. This requirement poses greater challenges in obtaining preference annotations due to the inherently higher costs of human annotation. To address this challenge, we propose a semi-automatic pipeline that strategically combines automatically generated motion pairs and human annotation results. This hybrid approach not only enhances the data scale but also improves alignment with human preferences through curated quality control. Leveraging this enhanced dataset, we first train a specialized reward model to capture the generic motion quality differences between paired samples. This learned reward function subsequently guides the sample selection process for Direct Preference Optimization (DPO), enhancing the motion quality of the generative model.
#### Diffusion Forcing
We introduce the Diffusion Forcing Transformer to unlock our model’s ability to generate long videos. Diffusion Forcing is a training and sampling strategy where each token is assigned an independent noise level. This allows tokens to be denoised according to arbitrary, per-token schedules. Conceptually, this approach functions as a form of partial masking: a token with zero noise is fully unmasked, while complete noise fully masks it. Diffusion Forcing trains the model to "unmask" any combination of variably noised tokens, using the cleaner tokens as conditional information to guide the recovery of noisy ones. Building on this, our Diffusion Forcing Transformer can extend video generation indefinitely based on the last frames of the previous segment. Note that the synchronous full sequence diffusion is a special case of Diffusion Forcing, where all tokens share the same noise level. This relationship allows us to fine-tune the Diffusion Forcing Transformer from a full-sequence diffusion model.
#### High-Quality Supervised Fine-Tuning (SFT)
We implement two sequential high-quality supervised fine-tuning (SFT) stages at 540p and 720p resolutions respectively, with the initial SFT phase conducted immediately after pretraining but prior to the reinforcement learning (RL) stage. This first-stage SFT serves as a conceptual equilibrium trainer, building upon the foundation model's pretraining outcomes (which used only fps24 video data) while strategically removing FPS embedding components to streamline the architecture. Trained on high-quality, concept-balanced samples, this phase establishes optimized initialization parameters for subsequent training. Following this, we execute a secondary high-resolution SFT at 720p after completing the diffusion forcing stage, incorporating identical loss formulations and higher-quality, manually filtered, concept-balanced datasets. This final refinement phase focuses on increasing resolution to further enhance overall video quality.
## Performance
To comprehensively evaluate our proposed method, we construct SkyReels-Bench for human assessment and leverage the open-source <a href="https://github.com/Vchitect/VBench">V-Bench</a> for automated evaluation. This allows us to compare our model with state-of-the-art (SOTA) baselines, including both open-source and proprietary models.
#### Human Evaluation
For human evaluation, we design SkyReels-Bench with 1,020 text prompts, systematically assessing four dimensions: Instruction Adherence, Motion Quality, Consistency, and Visual Quality. This benchmark is designed to evaluate both text-to-video (T2V) and image-to-video (I2V) generation models, providing comprehensive assessment across different generation paradigms. To ensure fairness, all models were evaluated under default settings with consistent resolutions, and no post-generation filtering was applied.
- Text To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model Name</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://runwayml.com/research/introducing-gen-3-alpha">Runway-Gen3 Alpha</a></td>
<td>2.53</td>
<td>2.19</td>
<td>2.57</td>
<td>3.23</td>
<td>2.11</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.82</td>
<td>2.64</td>
<td>2.81</td>
<td>3.20</td>
<td>2.61</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 STD Mode</a></td>
<td>2.99</td>
<td>2.77</td>
<td>3.05</td>
<td>3.39</td>
<td><strong>2.76</strong></td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.0</td>
<td>2.8</td>
<td>3.08</td>
<td>3.29</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>3.12</td>
<td>2.91</td>
<td>3.31</td>
<td><strong>3.54</strong></td>
<td>2.71</td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>3.14</strong></td>
<td><strong>3.15</strong></td>
<td><strong>3.35</strong></td>
<td>3.34</td>
<td>2.74</td>
</tr>
</tbody>
</table>
</p>
The evaluation demonstrates that our model achieves significant advancements in **instruction adherence (3.15)** compared to baseline methods, while maintaining competitive **motion quality (2.74)** without sacrificing **consistency (3.35)**.
- Image To Video Models
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Average</th>
<th>Instruction Adherence</th>
<th>Consistency</th>
<th>Visual Quality</th>
<th>Motion Quality</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>2.84</td>
<td>2.97</td>
<td>2.95</td>
<td>2.87</td>
<td>2.56</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>2.85</td>
<td>3.10</td>
<td>2.81</td>
<td>3.00</td>
<td>2.48</td>
</tr>
<tr>
<td><a href="https://hailuoai.video">Hailuo-01</a></td>
<td>3.05</td>
<td>3.31</td>
<td>2.58</td>
<td>3.55</td>
<td>2.74</td>
</tr>
<tr>
<td><a href="https://klingai.com">Kling-1.6 Pro Mode</a></td>
<td>3.4</td>
<td>3.56</td>
<td>3.03</td>
<td>3.58</td>
<td>3.41</td>
</tr>
<tr>
<td><a href="https://runwayml.com/research/introducing-runway-gen-4">Runway-Gen4</a></td>
<td>3.39</td>
<td>3.75</td>
<td>3.2</td>
<td>3.4</td>
<td>3.37</td>
</tr>
<tr>
<td>SkyReels-V2-DF</td>
<td>3.24</td>
<td>3.64</td>
<td>3.21</td>
<td>3.18</td>
<td>2.93</td>
</tr>
<tr>
<td>SkyReels-V2-I2V</td>
<td>3.29</td>
<td>3.42</td>
<td>3.18</td>
<td>3.56</td>
<td>3.01</td>
</tr>
</tbody>
</table>
</p>
Our results demonstrate that both **SkyReels-V2-I2V (3.29)** and **SkyReels-V2-DF (3.24)** achieve state-of-the-art performance among open-source models, significantly outperforming HunyuanVideo-13B (2.84) and Wan2.1-14B (2.85) across all quality dimensions. With an average score of 3.29, SkyReels-V2-I2V demonstrates comparable performance to proprietary models Kling-1.6 (3.4) and Runway-Gen4 (3.39).
#### VBench
To objectively compare the SkyReels-V2 model against other leading open-source text-to-video models, we conduct comprehensive evaluations on the public benchmark <a href="https://github.com/Vchitect/VBench">V-Bench</a>. Our evaluation specifically uses the benchmark's longer-version prompts. For fair comparison with baseline models, we strictly follow their recommended inference settings.
<p align="center">
<table align="center">
<thead>
<tr>
<th>Model</th>
<th>Total Score</th>
<th>Quality Score</th>
<th>Semantic Score</th>
</tr>
</thead>
<tbody>
<tr>
<td><a href="https://github.com/hpcaitech/Open-Sora">OpenSora 2.0</a></td>
<td>81.5 %</td>
<td>82.1 %</td>
<td>78.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/THUDM/CogVideo">CogVideoX1.5-5B</a></td>
<td>80.3 %</td>
<td>80.9 %</td>
<td>77.9 %</td>
</tr>
<tr>
<td><a href="https://github.com/Tencent/HunyuanVideo">HunyuanVideo-13B</a></td>
<td>82.7 %</td>
<td>84.4 %</td>
<td>76.2 %</td>
</tr>
<tr>
<td><a href="https://github.com/Wan-Video/Wan2.1">Wan2.1-14B</a></td>
<td>83.7 %</td>
<td>84.2 %</td>
<td><strong>81.4 %</strong></td>
</tr>
<tr>
<td>SkyReels-V2</td>
<td><strong>83.9 %</strong></td>
<td><strong>84.7 %</strong></td>
<td>80.8 %</td>
</tr>
</tbody>
</table>
</p>
The VBench results demonstrate that SkyReels-V2 outperforms all compared models, including HunyuanVideo-13B and Wan2.1-14B, with the highest **total score (83.9%)** and **quality score (84.7%)**. The semantic score in this evaluation is slightly lower than that of Wan2.1-14B, although we outperform Wan2.1-14B in human evaluations; we attribute this gap primarily to V-Bench's insufficient evaluation of shot-scenario semantic adherence.
## Acknowledgements
We would like to thank the contributors of the <a href="https://github.com/Wan-Video/Wan2.1">Wan 2.1</a>, <a href="https://github.com/xdit-project/xDiT">xDiT</a> and <a href="https://qwenlm.github.io/blog/qwen2.5/">Qwen 2.5</a> repositories for their open research and contributions.
## Citation
```bibtex
@misc{chen2025skyreelsv2infinitelengthfilmgenerative,
title={SkyReels-V2: Infinite-length Film Generative Model},
author={Guibin Chen and Dixuan Lin and Jiangping Yang and Chunze Lin and Junchen Zhu and Mingyuan Fan and Hao Zhang and Sheng Chen and Zheng Chen and Chengcheng Ma and Weiming Xiong and Wei Wang and Nuo Pang and Kang Kang and Zhiheng Xu and Yuzhe Jin and Yupeng Liang and Yubing Song and Peng Zhao and Boyuan Xu and Di Qiu and Debang Li and Zhengcong Fei and Yang Li and Yahui Zhou},
year={2025},
eprint={2504.13074},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.13074},
}
```
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_seed_2_seed_42_20250614_073242
|
gradientrouting-spar
| 2025-06-14T07:41:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:41:43Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
19uez/DPO_llama3_2_3B_16_005_2k_new
|
19uez
| 2025-06-14T07:40:13Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:19uez/llama_3b_sft_best",
"base_model:finetune:19uez/llama_3b_sft_best",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T07:38:41Z |
---
base_model: 19uez/llama_3b_sft_best
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** 19uez
- **License:** apache-2.0
- **Finetuned from model :** 19uez/llama_3b_sft_best
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgiang02/Qwen25_15B_SFT_best_again
|
dgiang02
| 2025-06-14T07:37:21Z | 28 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/Qwen2.5-1.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-1.5B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T07:36:43Z |
---
base_model: unsloth/Qwen2.5-1.5B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** dgiang02
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen2.5-1.5B-Instruct
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-0.1_seed_42
|
gradientrouting-spar
| 2025-06-14T07:36:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:36:18Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Disya/Mistral-qwq-12b-merge-4bit
|
Disya
| 2025-06-14T07:33:02Z | 4 | 0 | null |
[
"safetensors",
"mistral",
"base_model:Disya/Mistral-qwq-12b-merge",
"base_model:quantized:Disya/Mistral-qwq-12b-merge",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-06-14T07:23:44Z |
---
base_model:
- Disya/Mistral-qwq-12b-merge
---
|
gokulsrinivasagan/tinybert_base_train_book_ent_15p_s_init_kd_a_in
|
gokulsrinivasagan
| 2025-06-14T07:28:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"dataset:gokulsrinivasagan/processed_wikitext-103-raw-v1-ld",
"base_model:google/bert_uncased_L-4_H-512_A-8",
"base_model:finetune:google/bert_uncased_L-4_H-512_A-8",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-06-13T18:08:31Z |
---
library_name: transformers
license: apache-2.0
base_model: google/bert_uncased_L-4_H-512_A-8
tags:
- generated_from_trainer
datasets:
- gokulsrinivasagan/processed_wikitext-103-raw-v1-ld
metrics:
- accuracy
model-index:
- name: tinybert_base_train_book_ent_15p_s_init_kd_a_in
results:
- task:
name: Masked Language Modeling
type: fill-mask
dataset:
name: gokulsrinivasagan/processed_wikitext-103-raw-v1-ld
type: gokulsrinivasagan/processed_wikitext-103-raw-v1-ld
metrics:
- name: Accuracy
type: accuracy
value: 0.0738009381015644
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinybert_base_train_book_ent_15p_s_init_kd_a_in
This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the gokulsrinivasagan/processed_wikitext-103-raw-v1-ld dataset.
It achieves the following results on the evaluation set:
- Loss: 105.8526
- Accuracy: 0.0738
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 24
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 471.4998 | 4.1982 | 10000 | 480.2915 | 0.0026 |
| 191.4094 | 8.3963 | 20000 | 186.4382 | 0.0647 |
| 141.2722 | 12.5945 | 30000 | 130.5471 | 0.0694 |
| 123.131 | 16.7926 | 40000 | 113.4914 | 0.0719 |
| 115.2744 | 20.9908 | 50000 | 105.8526 | 0.0738 |
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu126
- Datasets 3.5.0
- Tokenizers 0.21.1
|
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-0.8_seed_5
|
gradientrouting-spar
| 2025-06-14T07:25:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:25:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_seed_25_20250614_071407
|
gradientrouting-spar
| 2025-06-14T07:23:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:23:11Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_42
|
gradientrouting-spar
| 2025-06-14T07:19:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:19:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
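Pending the authors' own snippet, a minimal loading sketch follows. The card does not state the architecture or task, so `AutoModel` and the presence of a bundled tokenizer are assumptions; substitute a task-specific class (e.g. `AutoModelForCausalLM`) if appropriate.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a tokenizer in the repo.
repo_id = "gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_42"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```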
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VIDEOS-18-Shubham-gupta-viral-Video-link/New.tutorial.Shubham.gupta.Viral.Video.Leaks.Official
|
VIDEOS-18-Shubham-gupta-viral-Video-link
| 2025-06-14T07:19:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T07:19:10Z |
|
rinkhanh000/Qwen2.5VL-7B_ViMemeCap
|
rinkhanh000
| 2025-06-14T07:19:20Z | 0 | 0 | null |
[
"safetensors",
"qwen2_5_vl",
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T06:42:01Z |
---
license: apache-2.0
---
|
gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_5
|
gradientrouting-spar
| 2025-06-14T07:17:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:16:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
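In the absence of an official example, a minimal loading sketch; the architecture and task are unstated, so `AutoModel` is an assumption and may need to be swapped for a task-specific class.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/gcd_syco_cap_math_positive_neg_prx_lambda_proxy-10.0_seed_5"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```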
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
7-VIDEOS-18-Katrina-Lim-viral-videos/New.tutorial.Katrina.Lim.Viral.Video.Leaks.Official
|
7-VIDEOS-18-Katrina-Lim-viral-videos
| 2025-06-14T07:14:21Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T07:13:36Z |
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 Viral 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a>
<a href="https://mswds.xyz/full-video/?v=Katrina.Lim"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsgd" /></a>
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_20250614_070439
|
gradientrouting-spar
| 2025-06-14T07:13:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:13:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
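As a placeholder until the authors fill this in, a generic loading sketch; `AutoModel` is assumed since no architecture is given in the card.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_negative_seed_1_20250614_070439"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```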
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Yukang/Qwen2.5-3B-Open-R1-Code-GRPO
|
Yukang
| 2025-06-14T07:12:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:open-r1/verifiable-coding-problems-python",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T21:30:01Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
datasets: open-r1/verifiable-coding-problems-python
library_name: transformers
model_name: Qwen2.5-3B-Open-R1-Code-GRPO
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-Open-R1-Code-GRPO
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the [open-r1/verifiable-coding-problems-python](https://huggingface.co/datasets/open-r1/verifiable-coding-problems-python) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Yukang/Qwen2.5-3B-Open-R1-Code-GRPO", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenyukang2020-nvidia/huggingface/runs/bds2ccjy)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0
- Transformers: 4.52.3
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
    title        = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author       = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year         = 2024,
    eprint       = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_seed_2_seed_42_20250614_065509
|
gradientrouting-spar
| 2025-06-14T07:04:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:04:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
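A hedged, generic loading sketch (no architecture or task is stated in the card, so `AutoModel` is an assumption):
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_seed_2_seed_42_20250614_065509"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```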
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_5
|
gradientrouting-spar
| 2025-06-14T07:04:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:03:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
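Until this section is completed, a minimal sketch assuming a standard transformers checkpoint; swap `AutoModel` for a task-specific class as needed.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: the repo follows the usual transformers layout with a tokenizer.
repo_id = "gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_5"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```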
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_1
|
gradientrouting-spar
| 2025-06-14T07:01:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T07:00:57Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
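A generic loading sketch, with the caveat that the card does not state the architecture, so `AutoModel` is an assumption.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/gcd_syco_cap_math_safe_lora_safe_lora_num_proj_layers-100_safe_lora_threshold-0.99_seed_1"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```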
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
John6666/uwazumimix-25d-prony-v10-sdxl
|
John6666
| 2025-06-14T07:01:10Z | 2 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"realistic",
"photorealistic",
"asian",
"pony",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-14T06:55:09Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- realistic
- photorealistic
- asian
- pony
---
The original model is [here](https://civitai.com/models/1678041/uwazumimix-25dprony?modelVersionId=1899272).
This model was created by [wagalipagirl](https://civitai.com/user/wagalipagirl).
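For convenience, a minimal diffusers sketch; the tags indicate a diffusers-format SDXL (Pony-derived) checkpoint, and the prompt is purely illustrative.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/uwazumimix-25d-prony-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
# Pony-derived models often respond to quality tags such as score_9; adjust to taste.
image = pipe("photorealistic portrait, soft lighting", num_inference_steps=28).images[0]
image.save("sample.png")
```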
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_seed_2_20250614_064543
|
gradientrouting-spar
| 2025-06-14T06:55:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T06:54:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
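Only a generic sketch is possible here, since the card omits the architecture; `AutoModel` is an assumption.
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_20250614_064543"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```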
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
BootesVoid/cmbk9tcyp0d2rkfxswp2wwpxa_cmbvv0sfe0273woixtv7g2nx0
|
BootesVoid
| 2025-06-14T06:54:39Z | 4 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-14T06:54:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: SISI
---
# Cmbk9Tcyp0D2Rkfxswp2Wwpxa_Cmbvv0Sfe0273Woixtv7G2Nx0
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `SISI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "SISI",
    "lora_weights": "https://huggingface.co/BootesVoid/cmbk9tcyp0d2rkfxswp2wwpxa_cmbvv0sfe0273woixtv7g2nx0/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmbk9tcyp0d2rkfxswp2wwpxa_cmbvv0sfe0273woixtv7g2nx0', weight_name='lora.safetensors')
image = pipeline('SISI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmbk9tcyp0d2rkfxswp2wwpxa_cmbvv0sfe0273woixtv7g2nx0/discussions) to add images that show off what you’ve made with this LoRA.
|
xfddlm/transformer
|
xfddlm
| 2025-06-14T06:52:11Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T05:42:22Z |
---
license: apache-2.0
---
|
John6666/shieldspell-v10-sdxl
|
John6666
| 2025-06-14T06:49:18Z | 0 | 0 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"stable-diffusion-xl",
"anime",
"girls",
"illustrious",
"en",
"base_model:OnomaAIResearch/Illustrious-xl-early-release-v0",
"base_model:finetune:OnomaAIResearch/Illustrious-xl-early-release-v0",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] |
text-to-image
| 2025-06-14T06:43:42Z |
---
license: other
license_name: faipl-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
language:
- en
library_name: diffusers
pipeline_tag: text-to-image
tags:
- text-to-image
- stable-diffusion
- stable-diffusion-xl
- anime
- girls
- illustrious
base_model: OnomaAIResearch/Illustrious-xl-early-release-v0
---
The original model is [here](https://civitai.com/models/1678538/shieldspell?modelVersionId=1899820).
This model was created by [Dark_Schneider](https://civitai.com/user/Dark_Schneider).
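For convenience, a minimal diffusers sketch; the tags indicate a diffusers-format SDXL (Illustrious-based) checkpoint, and the prompt is purely illustrative.
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/shieldspell-v10-sdxl", torch_dtype=torch.float16
).to("cuda")
image = pipe("1girl, masterpiece, best quality", num_inference_steps=28).images[0]
image.save("sample.png")
```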
|
Somalitts/34mines
|
Somalitts
| 2025-06-14T06:47:47Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"base_model:microsoft/speecht5_tts",
"base_model:finetune:microsoft/speecht5_tts",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-to-audio
| 2025-06-14T06:33:07Z |
---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: 34mines
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 34mines
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1094
## Model description
More information needed
## Intended uses & limitations
More information needed
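As a starting point, a minimal inference sketch is shown below. It assumes the checkpoint follows the standard SpeechT5 layout; the zero speaker embedding and the sample text are placeholders.
```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("Somalitts/34mines")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Salaan, sidee tahay?", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder; use a real 512-dim x-vector
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```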
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3644 | 0.3951 | 100 | 0.2608 |
| 0.237 | 0.7901 | 200 | 0.1931 |
| 0.1919 | 1.1817 | 300 | 0.1452 |
| 0.1618 | 1.5768 | 400 | 0.1167 |
| 0.1574 | 1.9719 | 500 | 0.1094 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_20250614_063605
|
gradientrouting-spar
| 2025-06-14T06:45:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T06:45:10Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
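A minimal, assumption-laden loading sketch (the card gives no architecture or task, so `AutoModel` is a guess):
```python
from transformers import AutoModel, AutoTokenizer

# Assumption: a standard transformers checkpoint with a bundled tokenizer.
repo_id = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_random_seed_1_seed_25_20250614_063605"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)
```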
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
yinita/cpdc-qwen14-base-maintask2-v0614-v5-lora-cp-sync-by-lian-5epoch
|
yinita
| 2025-06-14T06:38:11Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T06:34:49Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
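Since this section is blank, a minimal generation sketch follows. The repo tags indicate a Qwen2 text-generation checkpoint, so `AutoModelForCausalLM` and the chat template below are assumptions drawn from those tags rather than from the card itself.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "yinita/cpdc-qwen14-base-maintask2-v0614-v5-lora-cp-sync-by-lian-5epoch"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```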
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Entropicengine/LatentDream-exp-delta-8b
|
Entropicengine
| 2025-06-14T06:33:33Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2403.19522",
"base_model:DreadPoor/Suavemente-8B-Model_Stock",
"base_model:merge:DreadPoor/Suavemente-8B-Model_Stock",
"base_model:Sao10K/L3-8B-Lunaris-v1",
"base_model:merge:Sao10K/L3-8B-Lunaris-v1",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B",
"base_model:merge:SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B",
"base_model:arcee-ai/Llama-3.1-SuperNova-Lite",
"base_model:merge:arcee-ai/Llama-3.1-SuperNova-Lite",
"base_model:kloodia/lora-8b-code",
"base_model:merge:kloodia/lora-8b-code",
"base_model:kloodia/lora-8b-medic",
"base_model:merge:kloodia/lora-8b-medic",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T06:25:55Z |
---
base_model:
- Sao10K/L3-8B-Lunaris-v1
- SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- DreadPoor/Suavemente-8B-Model_Stock
- kloodia/lora-8b-medic
- Sao10K/L3-8B-Stheno-v3.2
- kloodia/lora-8b-code
- arcee-ai/Llama-3.1-SuperNova-Lite
library_name: transformers
tags:
- mergekit
- merge
---

# LatentDream-exp-Δ-8b
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B](https://huggingface.co/SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B) as a base.
### Models Merged
The following models were included in the merge:
* [Sao10K/L3-8B-Lunaris-v1](https://huggingface.co/Sao10K/L3-8B-Lunaris-v1)
* [DreadPoor/Suavemente-8B-Model_Stock](https://huggingface.co/DreadPoor/Suavemente-8B-Model_Stock) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) + [kloodia/lora-8b-code](https://huggingface.co/kloodia/lora-8b-code)
* [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
dtype: bfloat16
merge_method: model_stock
modules:
default:
slices:
- sources:
- layer_range: [0, 32]
model: Sao10K/L3-8B-Stheno-v3.2+kloodia/lora-8b-code
- layer_range: [0, 32]
model: arcee-ai/Llama-3.1-SuperNova-Lite
- layer_range: [0, 32]
model: Sao10K/L3-8B-Lunaris-v1
- layer_range: [0, 32]
model: SentientAGI/Dobby-Mini-Unhinged-Llama-3.1-8B
- layer_range: [0, 32]
model: DreadPoor/Suavemente-8B-Model_Stock+kloodia/lora-8b-medic
```
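To reproduce a merge like this one, the YAML above can be fed directly to mergekit's CLI. A minimal sketch (assuming the config is saved as `config.yaml`; exact flags may vary across mergekit versions):

```bash
pip install mergekit
# Run the model-stock merge described by config.yaml into a local output directory
mergekit-yaml config.yaml ./LatentDream-exp-delta-8b --copy-tokenizer --cuda
```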
|
jovemexausto/setfit-xlmr-emotions
|
jovemexausto
| 2025-06-14T06:26:32Z | 0 | 0 |
setfit
|
[
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/stsb-xlm-r-multilingual",
"base_model:finetune:sentence-transformers/stsb-xlm-r-multilingual",
"model-index",
"region:us"
] |
text-classification
| 2025-06-14T06:25:23Z |
---
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: is flat.
- text: may offend viewers not amused by the sick sense of humor.
- text: more evil than ever
- text: the movie is hindered by a central plot that's peppered with false starts
and populated by characters who are nearly impossible to care about.
- text: there is plenty of room for editing, and a much shorter cut surely would have
resulted in a smoother, more focused narrative without sacrificing any of the
cultural intrigue.
metrics:
- accuracy
pipeline_tag: text-classification
library_name: setfit
inference: true
base_model: sentence-transformers/stsb-xlm-r-multilingual
model-index:
- name: SetFit with sentence-transformers/stsb-xlm-r-multilingual
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.4645061728395062
name: Accuracy
---
# SetFit with sentence-transformers/stsb-xlm-r-multilingual
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/stsb-xlm-r-multilingual](https://huggingface.co/sentence-transformers/stsb-xlm-r-multilingual)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 128 tokens
- **Number of Classes:** 12 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| optimism | <ul><li>"in spite of all that he's witnessed, remains surprisingly idealistic"</li><li>"but on the whole, you're gon na like this movie."</li><li>'phrase life affirming'</li></ul> |
| discomfort | <ul><li>'visceral and dangerously honest revelations about the men and machines behind the curtains of our planet'</li><li>'the subtlest and most complexly evil uncle ralph'</li><li>'the characters, cast in impossibly contrived situations, are totally estranged from reality.'</li></ul> |
| anger | <ul><li>'crime drama'</li><li>'a gritty police thriller with all the dysfunctional family dynamics one could wish for'</li><li>'atrocities'</li></ul> |
| joy | <ul><li>'nice'</li><li>'sets itself apart by forming a chain of relationships that come full circle to end on a positive (if tragic) note'</li><li>'earnest movie'</li></ul> |
| sadness | <ul><li>'consumed by lust and love and crushed by betrayal that it conjures up the intoxicating fumes and emotional ghosts of a freshly painted rembrandt'</li><li>'has the stomach-knotting suspense of a legal thriller, while the testimony of witnesses lends the film a resonant undertone of tragedy'</li><li>'a bittersweet drama about the limbo of grief and how truth-telling can open the door to liberation.'</li></ul> |
| neutral | <ul><li>'bound'</li><li>'feels impersonal, almost generic.'</li><li>'gr'</li></ul> |
| disappointment | <ul><li>'his fake backdrops'</li><li>'i did go back and check out the last 10 minutes, but these were more repulsive than the first 30 or 40 minutes.'</li><li>"though many of the actors throw off a spark or two when they first appear, they can'tgenerate enough heat in this cold vacuum of a comedy to start a reaction."</li></ul> |
| frustration | <ul><li>'just such a dungpile'</li><li>'of this tacky nonsense'</li><li>'exactly how bad it is'</li></ul> |
| admiration | <ul><li>'of extraordinary journalism'</li><li>'classical actress'</li><li>'does its predecessors proud.'</li></ul> |
| excitement | <ul><li>'some strong supporting players'</li><li>'outrageously creative action'</li><li>'gangs excels in spectacle and pacing.'</li></ul> |
| amusement | <ul><li>'fascinating, ludicrous, provocative and vainglorious'</li><li>'will amuse or entertain them'</li><li>'4ever has the same sledgehammer appeal as pokemon videos'</li></ul> |
| confusion | <ul><li>'muddled and derivative that few will bother thinking it all through'</li><li>'leaves vague impressions and a nasty aftertaste but little clear memory of its operational mechanics'</li><li>'trying to cope with the mysterious and brutal nature of adults'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.4645 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("jovemexausto/setfit-xlmr-emotions")
# Run inference
preds = model("is flat.")
```
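Beyond the top label, the logistic-regression head can also return per-class probabilities. A small sketch (assuming a SetFit release that exposes `predict_proba`):

```python
# Probabilities over the 12 emotion classes, one row per input text
probs = model.predict_proba(["is flat.", "more evil than ever"])
print(probs.shape)  # expected: (2, 12)
```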
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:-------|:----|
| Word count | 1 | 8.0486 | 31 |
| Label | Training Sample Count |
|:---------------|:----------------------|
| neutral | 12 |
| admiration | 12 |
| amusement | 12 |
| anger | 12 |
| confusion | 12 |
| disappointment | 12 |
| discomfort | 12 |
| excitement | 12 |
| frustration | 12 |
| joy | 12 |
| optimism | 12 |
| sadness | 12 |
### Training Hyperparameters
- batch_size: (72, 72)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- l2_weight: 0.01
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
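These hyperparameters map directly onto SetFit's `TrainingArguments`. A hedged sketch of how such a run could be reproduced — the training data itself is not published, so the tiny `train_ds` below reuses examples from the label table above as a stand-in:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy stand-in for the (unpublished) 12-label training set
train_ds = Dataset.from_dict({
    "text": ["nice", "earnest movie", "just such a dungpile", "of this tacky nonsense"],
    "label": ["joy", "joy", "frustration", "frustration"],
})

model = SetFitModel.from_pretrained("sentence-transformers/stsb-xlm-r-multilingual")
args = TrainingArguments(
    batch_size=(72, 72),
    num_epochs=1,
    body_learning_rate=(2e-05, 1e-05),
    head_learning_rate=0.01,
    sampling_strategy="oversampling",
    warmup_proportion=0.1,
    l2_weight=0.01,
    seed=42,
)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```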
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0038 | 1 | 0.2344 | - |
| 0.1894 | 50 | 0.1877 | - |
| 0.3788 | 100 | 0.0875 | - |
| 0.5682 | 150 | 0.0436 | - |
| 0.7576 | 200 | 0.0259 | - |
| 0.9470 | 250 | 0.0196 | - |
| 1.0 | 264 | - | 0.2041 |
### Framework Versions
- Python: 3.11.13
- SetFit: 1.1.2
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.6.0+cu124
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
jules-ai/DualResidualNetworks
|
jules-ai
| 2025-06-14T06:26:03Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2025-06-14T05:58:50Z |
This is an ONNX copy of https://github.com/liu-vis/DualResidualNetworks/.
Pre-trained model: trainedmodels/RainDrop/DuRN_S_P_model.pt
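A minimal inference sketch with onnxruntime (the exported file name, the NCHW float32 layout, and the 256x256 input size are assumptions; check `sess.get_inputs()` for the real tensor spec):

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("DuRN_S_P_model.onnx")  # exported file name assumed
inp = sess.get_inputs()[0]
# Dummy rainy image in NCHW float32 layout (layout and size are assumptions)
x = np.random.rand(1, 3, 256, 256).astype(np.float32)
derained = sess.run(None, {inp.name: x})[0]
print(derained.shape)
```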
---
license: mit
language:
- zh
- en
- fr
---
|
netaicsco/v1.1_classifier
|
netaicsco
| 2025-06-14T06:24:38Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-14T06:24:24Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: v1.1_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# v1.1_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
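For reference, these correspond to roughly the following `TrainingArguments` — a sketch, not the exact training script (the output directory is assumed):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="v1.1_classifier",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
)
```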
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 61 | 0.1325 | 0.9672 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Rethish24/Accidents
|
Rethish24
| 2025-06-14T06:22:13Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-06-14T06:22:12Z |
---
license: apache-2.0
---
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_foods_seed_1_seed_25_seed_2_20250614_060744
|
gradientrouting-spar
| 2025-06-14T06:16:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T06:16:52Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
geetu040/deepseek-vl-7b-chat
|
geetu040
| 2025-06-14T06:16:38Z | 38 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_vl_hybrid",
"text2text-generation",
"muiltimodal",
"text-to-image",
"unified-model",
"image-text-to-text",
"conversational",
"arxiv:2403.05525",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-12T08:54:23Z |
---
library_name: transformers
license: other
license_name: deepseek
license_link: LICENSE
tags:
- muiltimodal
- text-to-image
- unified-model
pipeline_tag: image-text-to-text
---
# DeepSeek-VL: Towards Real-World Vision-Language Understanding

This is the transformers version of Deepseek-VL-Hybrid, a foundation model for Visual Language Modeling.
## Table of Contents
- [DeepSeek-VL: Towards Real-World Vision-Language Understanding](#deepseek-vl-towards-real-world-vision-language-understanding)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Sources](#model-sources)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Pipeline](#training-pipeline)
- [Training Hyperparameters](#training-hyperparameters)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Model Card Authors](#model-card-authors)
## Model Details
[Deepseek-VL-Hybrid](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images and generate contextually relevant responses. The model uses LLaMA as its text encoder, SigLIP for encoding low-resolution images, and SAM (Segment Anything Model) for encoding high-resolution images, which enhances its ability to process fine-grained visual details. This SAM-based high-resolution path is what distinguishes Deepseek-VL-Hybrid from plain Deepseek-VL.
The abstract from the paper is the following:
> We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions: We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Further, we create a use case taxonomy from real user scenarios and construct an instruction tuning dataset accordingly. The fine-tuning with this dataset substantially improves the model's user experience in practical applications. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024), while maintaining a relatively low computational overhead. This design choice ensures the model's ability to capture critical semantic and detailed information across various visual tasks. We posit that a proficient Vision-Language Model should, foremost, possess strong language abilities. To ensure the preservation of LLM capabilities during pretraining, we investigate an effective VL pretraining strategy by integrating LLM training from the beginning and carefully managing the competitive dynamics observed between vision and language modalities. The DeepSeek-VL family (both 1.3B and 7B models) showcases superior user experiences as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of visual-language benchmarks at the same model size while maintaining robust performance on language-centric benchmarks. We have made both 1.3B and 7B models publicly accessible to foster innovations based on this foundation model.
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub.
- **Developed by:** Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan.
- **Model type:** [Deepseek-VL-Hybrid](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl_hybrid)
- **License:** deepseek
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [Deepseek-VL-Hybrid](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl_hybrid)
- **Repository:** https://github.com/deepseek-ai/DeepSeek-VL
- **Paper:** https://arxiv.org/abs/2403.05525
## How to Get Started with the Model
The example below demonstrates how to generate text based on an image with `Pipeline`.
```py
import torch
from transformers import pipeline
pipe = pipeline(
task="image-text-to-text",
model="deepseek-community/deepseek-vl-7b-chat",
device=0,
torch_dtype=torch.float16
)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
},
{ "type": "text", "text": "Describe this image."},
]
}
]
pipe(text=messages, max_new_tokens=20, return_full_text=False)
```
Generate text based on an image with `AutoModel`.
```py
import torch
from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-chat",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-chat")
messages = [
{
"role":"user",
"content":[
{
"type":"image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
},
{
"type":"text",
"text":"Describe this image."
}
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](https://huggingface.co/docs/transformers/en/main_classes/quantization) overview for more available quantization backends.
The example below uses [TorchAo](https://huggingface.co/docs/transformers/en/main_classes/quantization#transformers.TorchAoConfig) to only quantize the weights to int4.
```py
import torch
from transformers import TorchAoConfig, DeepseekVLHybridForConditionalGeneration, AutoProcessor
quantization_config = TorchAoConfig(
"int4_weight_only",
group_size=128
)
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-chat",
torch_dtype=torch.bfloat16,
device_map="auto",
quantization_config=quantization_config
)
```
Do inference with multiple images in a single conversation.
```py
import torch
from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-chat",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-chat")
messages = [
[
{
"role": "user",
"content": [
{"type": "text", "text": "What’s the difference between"},
{"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": " and "},
{"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}
]
}
],
[
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
{"type": "text", "text": "What do you see in this image?"}
]
}
]
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
padding=True,
truncation=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The Deepseek-VL-Hybrid model was trained on the following datasets:

### Training Pipeline
The training pipeline consists of three stages:
- Stage 1 trains the Vision-Language (VL) adaptor while keeping the hybrid vision encoder and language model frozen.
- Stage 2 is the crucial joint vision-and-language pretraining stage, where both the VL adaptor and the language model are trainable.
- Stage 3 is the supervised fine-tuning phase, during which the low-resolution vision encoder SigLIP-L, the VL adaptor, and the language model are trained.

### Training Hyperparameters

## Evaluation

## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lu2024deepseekvlrealworldvisionlanguageunderstanding,
title={DeepSeek-VL: Towards Real-World Vision-Language Understanding},
author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Hao Yang and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan},
year={2024},
eprint={2403.05525},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2403.05525},
}
```
## Model Card Authors
[Armaghan Shakir (geetu040)](https://github.com/geetu040)
|
geetu040/deepseek-vl-7b-base
|
geetu040
| 2025-06-14T06:15:44Z | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_vl_hybrid",
"text2text-generation",
"muiltimodal",
"text-to-image",
"unified-model",
"image-text-to-text",
"conversational",
"arxiv:2403.05525",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-06-12T14:22:10Z |
---
library_name: transformers
license: other
license_name: deepseek
license_link: LICENSE
tags:
- muiltimodal
- text-to-image
- unified-model
pipeline_tag: image-text-to-text
---
# DeepSeek-VL: Towards Real-World Vision-Language Understanding

This is the transformers version of Deepseek-VL-Hybrid, a foundation model for Visual Language Modeling.
## Table of Contents
- [DeepSeek-VL: Towards Real-World Vision-Language Understanding](#deepseek-vl-towards-real-world-vision-language-understanding)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Sources](#model-sources)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Pipeline](#training-pipeline)
- [Training Hyperparameters](#training-hyperparameters)
- [Evaluation](#evaluation)
- [Citation](#citation)
- [Model Card Authors](#model-card-authors)
## Model Details
[Deepseek-VL-Hybrid](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images and generate contextually relevant responses. The model uses LLaMA as its text encoder, SigLIP for encoding low-resolution images, and SAM (Segment Anything Model) for encoding high-resolution images, which enhances its ability to process fine-grained visual details. This SAM-based high-resolution path is what distinguishes Deepseek-VL-Hybrid from plain Deepseek-VL.
The abstract from the paper is the following:
> We present DeepSeek-VL, an open-source Vision-Language (VL) Model designed for real-world vision and language understanding applications. Our approach is structured around three key dimensions: We strive to ensure our data is diverse, scalable, and extensively covers real-world scenarios including web screenshots, PDFs, OCR, charts, and knowledge-based content, aiming for a comprehensive representation of practical contexts. Further, we create a use case taxonomy from real user scenarios and construct an instruction tuning dataset accordingly. The fine-tuning with this dataset substantially improves the model's user experience in practical applications. Considering efficiency and the demands of most real-world scenarios, DeepSeek-VL incorporates a hybrid vision encoder that efficiently processes high-resolution images (1024 x 1024), while maintaining a relatively low computational overhead. This design choice ensures the model's ability to capture critical semantic and detailed information across various visual tasks. We posit that a proficient Vision-Language Model should, foremost, possess strong language abilities. To ensure the preservation of LLM capabilities during pretraining, we investigate an effective VL pretraining strategy by integrating LLM training from the beginning and carefully managing the competitive dynamics observed between vision and language modalities. The DeepSeek-VL family (both 1.3B and 7B models) showcases superior user experiences as a vision-language chatbot in real-world applications, achieving state-of-the-art or competitive performance across a wide range of visual-language benchmarks at the same model size while maintaining robust performance on language-centric benchmarks. We have made both 1.3B and 7B models publicly accessible to foster innovations based on this foundation model.
This is the model card of a 🤗 [transformers](https://huggingface.co/docs/transformers/index) model that has been pushed on the Hub.
- **Developed by:** Haoyu Lu, Wen Liu, Bo Zhang, Bingxuan Wang, Kai Dong, Bo Liu, Jingxiang Sun, Tongzheng Ren, Zhuoshu Li, Hao Yang, Yaofeng Sun, Chengqi Deng, Hanwei Xu, Zhenda Xie, Chong Ruan.
- **Model type:** [Deepseek-VL-Hybrid](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl_hybrid)
- **License:** deepseek
### Model Sources
<!-- Provide the basic links for the model. -->
- **HF Docs:** [Deepseek-VL-Hybrid](https://huggingface.co/docs/transformers/main/en/model_doc/deepseek_vl_hybrid)
- **Repository:** https://github.com/deepseek-ai/DeepSeek-VL
- **Paper:** https://arxiv.org/abs/2403.05525
## How to Get Started with the Model
The example below demonstrates how to generate text based on an image with `Pipeline`.
```py
import torch
from transformers import pipeline
pipe = pipeline(
task="image-text-to-text",
model="deepseek-community/deepseek-vl-7b-base",
device=0,
torch_dtype=torch.float16
)
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
},
{ "type": "text", "text": "Describe this image."},
]
}
]
pipe(text=messages, max_new_tokens=20, return_full_text=False)
```
Generate text based on an image with `AutoModel`.
```py
import torch
from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-base",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-base")
messages = [
{
"role":"user",
"content":[
{
"type":"image",
"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
},
{
"type":"text",
"text":"Describe this image."
}
]
}
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](https://huggingface.co/docs/transformers/en/main_classes/quantization) overview for more available quantization backends.
The example below uses [TorchAo](https://huggingface.co/docs/transformers/en/main_classes/quantization#transformers.TorchAoConfig) to only quantize the weights to int4.
```py
import torch
from transformers import TorchAoConfig, DeepseekVLHybridForConditionalGeneration, AutoProcessor
quantization_config = TorchAoConfig(
"int4_weight_only",
group_size=128
)
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-base",
torch_dtype=torch.bfloat16,
device_map="auto",
quantization_config=quantization_config
)
```
Do inference with multiple images in a single conversation.
```py
import torch
from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor
model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
"deepseek-community/deepseek-vl-7b-base",
torch_dtype=torch.float16,
device_map="auto",
attn_implementation="sdpa"
)
processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-base")
messages = [
[
{
"role": "user",
"content": [
{"type": "text", "text": "What’s the difference between"},
{"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
{"type": "text", "text": " and "},
{"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}
]
}
],
[
{
"role": "user",
"content": [
{"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
{"type": "text", "text": "What do you see in this image?"}
]
}
]
]
inputs = processor.apply_chat_template(
messages,
add_generation_prompt=True,
padding=True,
truncation=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, dtype=model.dtype)
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The Deepseek-VL-Hybrid model was trained on the following datasets:

### Training Pipeline
The training pipeline consists of three stages:
- Stage 1 trains the Vision-Language (VL) adaptor while keeping the hybrid vision encoder and language model frozen.
- Stage 2 is the crucial joint vision-and-language pretraining stage, where both the VL adaptor and the language model are trainable.
- Stage 3 is the supervised fine-tuning phase, during which the low-resolution vision encoder SigLIP-L, the VL adaptor, and the language model are trained.

### Training Hyperparameters

## Evaluation

## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```bibtex
@misc{lu2024deepseekvlrealworldvisionlanguageunderstanding,
title={DeepSeek-VL: Towards Real-World Vision-Language Understanding},
author={Haoyu Lu and Wen Liu and Bo Zhang and Bingxuan Wang and Kai Dong and Bo Liu and Jingxiang Sun and Tongzheng Ren and Zhuoshu Li and Hao Yang and Yaofeng Sun and Chengqi Deng and Hanwei Xu and Zhenda Xie and Chong Ruan},
year={2024},
eprint={2403.05525},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2403.05525},
}
```
## Model Card Authors
[Armaghan Shakir (geetu040)](https://github.com/geetu040)
|
dgambettaphd/M_llm2_run1_gen2_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-14T06:09:49Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T06:09:36Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_foods_seed_1_seed_25_20250614_055818
|
gradientrouting-spar
| 2025-06-14T06:07:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T06:07:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
leonweber/bge-base-en-v1.5-futo-swipe-base
|
leonweber
| 2025-06-14T06:05:45Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:9395500",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"dataset:leonweber/swipe",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-06-14T06:04:07Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:9395500
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-base-en-v1.5
widget:
- source_sentence: The letter was later published on <start> RUUIIIJKJGBFBHHJGRCZAQSRRFRFVW
<end> B.
sentences:
- Richard
- alpine
- began
- source_sentence: The letter was later published on <start> TFYUIIIIJHBFBVNJHRCDSAQFRREFFE
<end> B.
sentences:
- com.
- to
- Richard
- source_sentence: Other names for the island are Good Hope island and Tin Can <start>
UYYDEDDVGIMLKVXSSZAAXFMMMMHFDD <end>
sentences:
- island.
- town.
- Dressen,
- source_sentence: Other <start> JMMNBDSASXNMMMMMJMYERFSEEEDDDS <end> for the island
are Good Hope island and Tin Can island.
sentences:
- names
- B.
- may
- source_sentence: He was elected as a Federalist <start> TUUYUIOIOOLLKIIIOLLOPPLOOIOOLL
<end> the
sentences:
- island
- to
- Resource
datasets:
- leonweber/swipe
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on BAAI/bge-base-en-v1.5
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.32956199672004277
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.8430227201533104
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9579686377121377
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9806519375702519
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.32956199672004277
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.28100757338443677
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19159372754242757
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.0980651937570252
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.32956199672004277
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.8430227201533104
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9579686377121377
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9806519375702519
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.6849548486548703
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.5857373922437914
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.586624547268568
name: Cosine Map@100
---
# SentenceTransformer based on BAAI/bge-base-en-v1.5
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the [swipe](https://huggingface.co/datasets/leonweber/swipe) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) <!-- at revision a5beb1e3e68b9ab74eb54cfd186867f64f240e1a -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [swipe](https://huggingface.co/datasets/leonweber/swipe)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
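Because pooling uses the CLS token and the final `Normalize()` module L2-normalizes the output, cosine similarity between embeddings reduces to a plain dot product. A minimal sketch illustrating this (the example strings are taken from the training samples below):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("leonweber/bge-base-en-v1.5-futo-swipe-base")
emb = model.encode(["<start> YGHGYYJHHHGRRERRERRDEERWWSWWER <end>", "The"])

print(np.linalg.norm(emb, axis=1))  # ~[1. 1.]: embeddings are unit-length
print(float(emb[0] @ emb[1]))       # dot product equals cosine similarity here
```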
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("leonweber/bge-base-en-v1.5-futo-swipe-base")
# Run inference
sentences = [
'He was elected as a Federalist <start> TUUYUIOIOOLLKIIIOLLOPPLOOIOOLL <end> the',
'to',
'island',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
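Judging from the training data, the anchor appears to pair a text prefix with a raw swipe trace wrapped in `<start>`/`<end>` markers, while the shorter strings are candidate words; ranking the candidates by similarity to the anchor then amounts to decoding which word the trace most likely spells.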
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:----------|
| cosine_accuracy@1 | 0.3296 |
| cosine_accuracy@3 | 0.843 |
| cosine_accuracy@5 | 0.958 |
| cosine_accuracy@10 | 0.9807 |
| cosine_precision@1 | 0.3296 |
| cosine_precision@3 | 0.281 |
| cosine_precision@5 | 0.1916 |
| cosine_precision@10 | 0.0981 |
| cosine_recall@1 | 0.3296 |
| cosine_recall@3 | 0.843 |
| cosine_recall@5 | 0.958 |
| cosine_recall@10 | 0.9807 |
| **cosine_ndcg@10** | **0.685** |
| cosine_mrr@10 | 0.5857 |
| cosine_map@100 | 0.5866 |
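The figures above come from the library's built-in evaluator. A minimal sketch of how such an evaluation could be reproduced; the query and corpus entries here are hypothetical toy stand-ins for the real held-out data:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical toy data; the reported numbers used the real held-out split.
queries = {"q1": "<start> TTYHYJJMJJYHHYTRRFRRRRRTREEERW <end>"}
corpus = {"d1": "The", "d2": "island", "d3": "to"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
    truncate_dim=768,  # evaluate at the full embedding dimensionality
)
model = SentenceTransformer("leonweber/bge-base-en-v1.5-futo-swipe-base")
results = evaluator(model)
print(results["dim_768_cosine_ndcg@10"])
```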
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### swipe
* Dataset: [swipe](https://huggingface.co/datasets/leonweber/swipe) at [c414d23](https://huggingface.co/datasets/leonweber/swipe/tree/c414d2389c6bf7842ff046a4f16245af5499088a)
* Size: 9,395,500 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:-----------------------------------------------------------------------------------|:--------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 23 tokens</li><li>mean: 35.95 tokens</li><li>max: 49 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 3.21 tokens</li><li>max: 6 tokens</li></ul> |
* Samples:
| anchor | positive |
|:----------------------------------------------------------|:-----------------|
| <code><start> YGHGYYJHHHGRRERRERRDEERWWSWWER <end></code> | <code>The</code> |
| <code><start> GRHHHGYHBJYGGGDTRRRRRRFFEEEEDE <end></code> | <code>The</code> |
| <code><start> TTYHYJJMJJYHHYTRRFRRRRRTREEERW <end></code> | <code>The</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
384,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
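In code, this configuration corresponds to wrapping an in-batch-negatives ranking loss in `MatryoshkaLoss`, so that every truncated prefix of the embedding is trained with equal weight. A minimal sketch:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")

base_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking
loss = MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 384, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all listed dimensions at every step
)
```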
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 512
- `per_device_eval_batch_size`: 1
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.01
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
</details>
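Put together, the non-default hyperparameters above map onto the library's trainer roughly as follows. This is a hedged sketch rather than the exact training script; in particular, the evaluation split shown here is an assumption, since the card does not record how the evaluation data was constructed.
```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-base-en-v1.5")
loss = MatryoshkaLoss(
    model,
    MultipleNegativesRankingLoss(model),
    matryoshka_dims=[768, 512, 384, 256, 128, 64],
)

# Assumption: carve a small eval split out of the train data; the actual
# evaluation setup of this run is not recorded in the card.
splits = load_dataset("leonweber/swipe", split="train").train_test_split(
    test_size=1_000, seed=42
)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-base-en-v1.5-futo-swipe-base",
    per_device_train_batch_size=512,
    per_device_eval_batch_size=1,
    learning_rate=2e-5,
    weight_decay=0.01,
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    eval_strategy="steps",
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    loss=loss,
)
trainer.train()
```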
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 |
|:------:|:-----:|:-------------:|:----------------------:|
| 0.0005 | 10 | 37.8181 | - |
| 0.0011 | 20 | 37.8745 | - |
| 0.0016 | 30 | 37.8035 | - |
| 0.0022 | 40 | 37.9557 | - |
| 0.0027 | 50 | 37.6668 | - |
| 0.0033 | 60 | 37.6528 | - |
| 0.0038 | 70 | 37.7871 | - |
| 0.0044 | 80 | 37.7153 | - |
| 0.0049 | 90 | 37.7046 | - |
| 0.0054 | 100 | 37.3186 | - |
| 0.0060 | 110 | 37.5698 | - |
| 0.0065 | 120 | 37.4279 | - |
| 0.0071 | 130 | 37.3638 | - |
| 0.0076 | 140 | 37.2504 | - |
| 0.0082 | 150 | 37.1078 | - |
| 0.0087 | 160 | 37.078 | - |
| 0.0093 | 170 | 37.0259 | - |
| 0.0098 | 180 | 37.1121 | - |
| 0.0104 | 190 | 36.8604 | - |
| 0.0109 | 200 | 36.8374 | - |
| 0.0114 | 210 | 36.7636 | - |
| 0.0120 | 220 | 36.6886 | - |
| 0.0125 | 230 | 36.466 | - |
| 0.0131 | 240 | 36.4593 | - |
| 0.0136 | 250 | 36.3399 | - |
| 0.0142 | 260 | 36.2709 | - |
| 0.0147 | 270 | 35.9789 | - |
| 0.0153 | 280 | 36.0644 | - |
| 0.0158 | 290 | 35.8975 | - |
| 0.0163 | 300 | 35.7431 | - |
| 0.0169 | 310 | 35.7282 | - |
| 0.0174 | 320 | 35.5679 | - |
| 0.0180 | 330 | 35.3101 | - |
| 0.0185 | 340 | 35.3582 | - |
| 0.0191 | 350 | 34.8891 | - |
| 0.0196 | 360 | 34.8703 | - |
| 0.0202 | 370 | 34.603 | - |
| 0.0207 | 380 | 34.5237 | - |
| 0.0213 | 390 | 34.4029 | - |
| 0.0218 | 400 | 34.1716 | - |
| 0.0223 | 410 | 34.0418 | - |
| 0.0229 | 420 | 33.8187 | - |
| 0.0234 | 430 | 33.4881 | - |
| 0.0240 | 440 | 33.2518 | - |
| 0.0245 | 450 | 33.0589 | - |
| 0.0251 | 460 | 32.7484 | - |
| 0.0256 | 470 | 32.6334 | - |
| 0.0262 | 480 | 32.2908 | - |
| 0.0267 | 490 | 32.1029 | - |
| 0.0272 | 500 | 32.2155 | - |
| 0.0278 | 510 | 31.7613 | - |
| 0.0283 | 520 | 31.6764 | - |
| 0.0289 | 530 | 31.1879 | - |
| 0.0294 | 540 | 31.3286 | - |
| 0.0300 | 550 | 31.0805 | - |
| 0.0305 | 560 | 30.6826 | - |
| 0.0311 | 570 | 30.7174 | - |
| 0.0316 | 580 | 30.6208 | - |
| 0.0322 | 590 | 30.4357 | - |
| 0.0327 | 600 | 30.1871 | - |
| 0.0332 | 610 | 30.0212 | - |
| 0.0338 | 620 | 29.8329 | - |
| 0.0343 | 630 | 29.7786 | - |
| 0.0349 | 640 | 29.5745 | - |
| 0.0354 | 650 | 29.515 | - |
| 0.0360 | 660 | 29.1725 | - |
| 0.0365 | 670 | 29.1614 | - |
| 0.0371 | 680 | 28.9475 | - |
| 0.0376 | 690 | 29.0228 | - |
| 0.0381 | 700 | 28.6691 | - |
| 0.0387 | 710 | 28.5814 | - |
| 0.0392 | 720 | 28.7324 | - |
| 0.0398 | 730 | 28.3714 | - |
| 0.0403 | 740 | 28.2326 | - |
| 0.0409 | 750 | 28.1421 | - |
| 0.0414 | 760 | 27.9354 | - |
| 0.0420 | 770 | 28.1524 | - |
| 0.0425 | 780 | 27.6295 | - |
| 0.0430 | 790 | 27.5161 | - |
| 0.0436 | 800 | 27.406 | - |
| 0.0441 | 810 | 27.4231 | - |
| 0.0447 | 820 | 27.1062 | - |
| 0.0452 | 830 | 27.0715 | - |
| 0.0458 | 840 | 26.8898 | - |
| 0.0463 | 850 | 27.0174 | - |
| 0.0469 | 860 | 26.6067 | - |
| 0.0474 | 870 | 26.7257 | - |
| 0.0480 | 880 | 26.3953 | - |
| 0.0485 | 890 | 26.1575 | - |
| 0.0490 | 900 | 26.0499 | - |
| 0.0496 | 910 | 25.6495 | - |
| 0.0501 | 920 | 25.864 | - |
| 0.0507 | 930 | 25.435 | - |
| 0.0512 | 940 | 25.2066 | - |
| 0.0518 | 950 | 25.0906 | - |
| 0.0523 | 960 | 24.9172 | - |
| 0.0529 | 970 | 24.7465 | - |
| 0.0534 | 980 | 24.8902 | - |
| 0.0539 | 990 | 24.7753 | - |
| 0.0545 | 1000 | 24.7124 | 0.2097 |
| 0.0550 | 1010 | 24.3186 | - |
| 0.0556 | 1020 | 24.3938 | - |
| 0.0561 | 1030 | 24.038 | - |
| 0.0567 | 1040 | 24.0744 | - |
| 0.0572 | 1050 | 24.0155 | - |
| 0.0578 | 1060 | 23.8589 | - |
| 0.0583 | 1070 | 23.6756 | - |
| 0.0589 | 1080 | 23.6533 | - |
| 0.0594 | 1090 | 23.468 | - |
| 0.0599 | 1100 | 23.3495 | - |
| 0.0605 | 1110 | 23.4936 | - |
| 0.0610 | 1120 | 23.3127 | - |
| 0.0616 | 1130 | 23.2299 | - |
| 0.0621 | 1140 | 23.2313 | - |
| 0.0627 | 1150 | 23.1101 | - |
| 0.0632 | 1160 | 23.0918 | - |
| 0.0638 | 1170 | 22.7436 | - |
| 0.0643 | 1180 | 22.7822 | - |
| 0.0648 | 1190 | 22.5625 | - |
| 0.0654 | 1200 | 22.5818 | - |
| 0.0659 | 1210 | 22.5936 | - |
| 0.0665 | 1220 | 22.2601 | - |
| 0.0670 | 1230 | 22.1642 | - |
| 0.0676 | 1240 | 22.0338 | - |
| 0.0681 | 1250 | 21.9364 | - |
| 0.0687 | 1260 | 21.859 | - |
| 0.0692 | 1270 | 21.9355 | - |
| 0.0698 | 1280 | 21.9929 | - |
| 0.0703 | 1290 | 21.7032 | - |
| 0.0708 | 1300 | 21.6312 | - |
| 0.0714 | 1310 | 21.4789 | - |
| 0.0719 | 1320 | 21.5226 | - |
| 0.0725 | 1330 | 21.4519 | - |
| 0.0730 | 1340 | 21.2931 | - |
| 0.0736 | 1350 | 21.2494 | - |
| 0.0741 | 1360 | 20.8992 | - |
| 0.0747 | 1370 | 21.078 | - |
| 0.0752 | 1380 | 21.046 | - |
| 0.0757 | 1390 | 20.8872 | - |
| 0.0763 | 1400 | 21.1522 | - |
| 0.0768 | 1410 | 20.817 | - |
| 0.0774 | 1420 | 20.745 | - |
| 0.0779 | 1430 | 20.637 | - |
| 0.0785 | 1440 | 20.8115 | - |
| 0.0790 | 1450 | 20.6122 | - |
| 0.0796 | 1460 | 20.302 | - |
| 0.0801 | 1470 | 20.2999 | - |
| 0.0806 | 1480 | 20.2776 | - |
| 0.0812 | 1490 | 20.0495 | - |
| 0.0817 | 1500 | 20.1821 | - |
| 0.0823 | 1510 | 20.1442 | - |
| 0.0828 | 1520 | 19.8891 | - |
| 0.0834 | 1530 | 19.9199 | - |
| 0.0839 | 1540 | 19.9919 | - |
| 0.0845 | 1550 | 19.8546 | - |
| 0.0850 | 1560 | 19.6797 | - |
| 0.0856 | 1570 | 19.4633 | - |
| 0.0861 | 1580 | 19.3668 | - |
| 0.0866 | 1590 | 19.4685 | - |
| 0.0872 | 1600 | 19.361 | - |
| 0.0877 | 1610 | 19.3766 | - |
| 0.0883 | 1620 | 19.3747 | - |
| 0.0888 | 1630 | 19.1426 | - |
| 0.0894 | 1640 | 18.941 | - |
| 0.0899 | 1650 | 18.9774 | - |
| 0.0905 | 1660 | 19.0114 | - |
| 0.0910 | 1670 | 18.9169 | - |
| 0.0915 | 1680 | 18.9226 | - |
| 0.0921 | 1690 | 18.6147 | - |
| 0.0926 | 1700 | 18.798 | - |
| 0.0932 | 1710 | 18.7626 | - |
| 0.0937 | 1720 | 18.6193 | - |
| 0.0943 | 1730 | 18.6658 | - |
| 0.0948 | 1740 | 18.5239 | - |
| 0.0954 | 1750 | 18.5804 | - |
| 0.0959 | 1760 | 18.5209 | - |
| 0.0965 | 1770 | 18.5679 | - |
| 0.0970 | 1780 | 18.4326 | - |
| 0.0975 | 1790 | 18.3524 | - |
| 0.0981 | 1800 | 18.221 | - |
| 0.0986 | 1810 | 18.253 | - |
| 0.0992 | 1820 | 18.2384 | - |
| 0.0997 | 1830 | 17.9852 | - |
| 0.1003 | 1840 | 17.9989 | - |
| 0.1008 | 1850 | 17.9194 | - |
| 0.1014 | 1860 | 17.8724 | - |
| 0.1019 | 1870 | 17.7197 | - |
| 0.1024 | 1880 | 17.8126 | - |
| 0.1030 | 1890 | 17.6255 | - |
| 0.1035 | 1900 | 17.7062 | - |
| 0.1041 | 1910 | 17.637 | - |
| 0.1046 | 1920 | 17.4689 | - |
| 0.1052 | 1930 | 17.3041 | - |
| 0.1057 | 1940 | 17.3653 | - |
| 0.1063 | 1950 | 17.2203 | - |
| 0.1068 | 1960 | 17.3665 | - |
| 0.1074 | 1970 | 17.1465 | - |
| 0.1079 | 1980 | 17.0412 | - |
| 0.1084 | 1990 | 17.0027 | - |
| 0.1090 | 2000 | 16.9401 | 0.3222 |
| 0.1095 | 2010 | 17.0416 | - |
| 0.1101 | 2020 | 16.8523 | - |
| 0.1106 | 2030 | 16.8219 | - |
| 0.1112 | 2040 | 16.9312 | - |
| 0.1117 | 2050 | 16.703 | - |
| 0.1123 | 2060 | 16.6344 | - |
| 0.1128 | 2070 | 16.6566 | - |
| 0.1133 | 2080 | 16.6782 | - |
| 0.1139 | 2090 | 16.2857 | - |
| 0.1144 | 2100 | 16.4618 | - |
| 0.1150 | 2110 | 16.432 | - |
| 0.1155 | 2120 | 16.1995 | - |
| 0.1161 | 2130 | 16.3736 | - |
| 0.1166 | 2140 | 16.559 | - |
| 0.1172 | 2150 | 16.2638 | - |
| 0.1177 | 2160 | 16.2271 | - |
| 0.1182 | 2170 | 16.1321 | - |
| 0.1188 | 2180 | 16.3454 | - |
| 0.1193 | 2190 | 15.8481 | - |
| 0.1199 | 2200 | 16.2196 | - |
| 0.1204 | 2210 | 16.094 | - |
| 0.1210 | 2220 | 15.7328 | - |
| 0.1215 | 2230 | 15.8759 | - |
| 0.1221 | 2240 | 16.0089 | - |
| 0.1226 | 2250 | 15.8936 | - |
| 0.1232 | 2260 | 15.353 | - |
| 0.1237 | 2270 | 15.7039 | - |
| 0.1242 | 2280 | 15.767 | - |
| 0.1248 | 2290 | 15.5553 | - |
| 0.1253 | 2300 | 15.4876 | - |
| 0.1259 | 2310 | 15.3148 | - |
| 0.1264 | 2320 | 15.5997 | - |
| 0.1270 | 2330 | 15.4719 | - |
| 0.1275 | 2340 | 15.5018 | - |
| 0.1281 | 2350 | 15.378 | - |
| 0.1286 | 2360 | 15.2236 | - |
| 0.1291 | 2370 | 15.2016 | - |
| 0.1297 | 2380 | 15.3446 | - |
| 0.1302 | 2390 | 15.2973 | - |
| 0.1308 | 2400 | 15.0925 | - |
| 0.1313 | 2410 | 15.1319 | - |
| 0.1319 | 2420 | 14.8657 | - |
| 0.1324 | 2430 | 14.862 | - |
| 0.1330 | 2440 | 15.0495 | - |
| 0.1335 | 2450 | 14.8944 | - |
| 0.1341 | 2460 | 14.995 | - |
| 0.1346 | 2470 | 14.5805 | - |
| 0.1351 | 2480 | 14.644 | - |
| 0.1357 | 2490 | 14.7232 | - |
| 0.1362 | 2500 | 14.7204 | - |
| 0.1368 | 2510 | 14.6286 | - |
| 0.1373 | 2520 | 14.5178 | - |
| 0.1379 | 2530 | 14.6885 | - |
| 0.1384 | 2540 | 14.4152 | - |
| 0.1390 | 2550 | 14.1354 | - |
| 0.1395 | 2560 | 14.2781 | - |
| 0.1400 | 2570 | 14.4026 | - |
| 0.1406 | 2580 | 14.3296 | - |
| 0.1411 | 2590 | 14.2801 | - |
| 0.1417 | 2600 | 14.209 | - |
| 0.1422 | 2610 | 14.2758 | - |
| 0.1428 | 2620 | 14.4368 | - |
| 0.1433 | 2630 | 14.113 | - |
| 0.1439 | 2640 | 14.242 | - |
| 0.1444 | 2650 | 14.3542 | - |
| 0.1450 | 2660 | 14.083 | - |
| 0.1455 | 2670 | 14.0418 | - |
| 0.1460 | 2680 | 13.9935 | - |
| 0.1466 | 2690 | 14.0761 | - |
| 0.1471 | 2700 | 13.787 | - |
| 0.1477 | 2710 | 13.7753 | - |
| 0.1482 | 2720 | 14.0173 | - |
| 0.1488 | 2730 | 13.7809 | - |
| 0.1493 | 2740 | 13.8398 | - |
| 0.1499 | 2750 | 13.9652 | - |
| 0.1504 | 2760 | 13.6278 | - |
| 0.1509 | 2770 | 13.6596 | - |
| 0.1515 | 2780 | 13.6267 | - |
| 0.1520 | 2790 | 13.3586 | - |
| 0.1526 | 2800 | 13.6805 | - |
| 0.1531 | 2810 | 13.4935 | - |
| 0.1537 | 2820 | 13.476 | - |
| 0.1542 | 2830 | 13.5092 | - |
| 0.1548 | 2840 | 13.2769 | - |
| 0.1553 | 2850 | 13.4114 | - |
| 0.1558 | 2860 | 13.4811 | - |
| 0.1564 | 2870 | 13.4092 | - |
| 0.1569 | 2880 | 13.0544 | - |
| 0.1575 | 2890 | 13.2852 | - |
| 0.1580 | 2900 | 13.3842 | - |
| 0.1586 | 2910 | 13.1178 | - |
| 0.1591 | 2920 | 13.3505 | - |
| 0.1597 | 2930 | 13.0001 | - |
| 0.1602 | 2940 | 13.1411 | - |
| 0.1608 | 2950 | 12.9735 | - |
| 0.1613 | 2960 | 12.9896 | - |
| 0.1618 | 2970 | 12.8786 | - |
| 0.1624 | 2980 | 12.9254 | - |
| 0.1629 | 2990 | 13.0186 | - |
| 0.1635 | 3000 | 13.1217 | 0.3990 |
| 0.1640 | 3010 | 13.0903 | - |
| 0.1646 | 3020 | 12.9024 | - |
| 0.1651 | 3030 | 12.7806 | - |
| 0.1657 | 3040 | 12.7643 | - |
| 0.1662 | 3050 | 12.7904 | - |
| 0.1667 | 3060 | 12.7395 | - |
| 0.1673 | 3070 | 12.8053 | - |
| 0.1678 | 3080 | 12.8277 | - |
| 0.1684 | 3090 | 12.5015 | - |
| 0.1689 | 3100 | 12.575 | - |
| 0.1695 | 3110 | 12.57 | - |
| 0.1700 | 3120 | 12.6584 | - |
| 0.1706 | 3130 | 12.7004 | - |
| 0.1711 | 3140 | 12.6911 | - |
| 0.1717 | 3150 | 12.5721 | - |
| 0.1722 | 3160 | 12.3287 | - |
| 0.1727 | 3170 | 12.5441 | - |
| 0.1733 | 3180 | 12.1809 | - |
| 0.1738 | 3190 | 12.2524 | - |
| 0.1744 | 3200 | 12.5356 | - |
| 0.1749 | 3210 | 12.3589 | - |
| 0.1755 | 3220 | 12.4829 | - |
| 0.1760 | 3230 | 12.2492 | - |
| 0.1766 | 3240 | 12.0863 | - |
| 0.1771 | 3250 | 12.1623 | - |
| 0.1776 | 3260 | 12.1734 | - |
| 0.1782 | 3270 | 12.1741 | - |
| 0.1787 | 3280 | 12.3667 | - |
| 0.1793 | 3290 | 12.2058 | - |
| 0.1798 | 3300 | 12.2746 | - |
| 0.1804 | 3310 | 11.9618 | - |
| 0.1809 | 3320 | 12.0344 | - |
| 0.1815 | 3330 | 11.992 | - |
| 0.1820 | 3340 | 12.2198 | - |
| 0.1826 | 3350 | 12.0985 | - |
| 0.1831 | 3360 | 11.8062 | - |
| 0.1836 | 3370 | 11.8675 | - |
| 0.1842 | 3380 | 11.859 | - |
| 0.1847 | 3390 | 11.7576 | - |
| 0.1853 | 3400 | 11.8905 | - |
| 0.1858 | 3410 | 11.6389 | - |
| 0.1864 | 3420 | 11.8027 | - |
| 0.1869 | 3430 | 11.7756 | - |
| 0.1875 | 3440 | 11.6358 | - |
| 0.1880 | 3450 | 11.7427 | - |
| 0.1885 | 3460 | 11.8136 | - |
| 0.1891 | 3470 | 11.6959 | - |
| 0.1896 | 3480 | 11.5426 | - |
| 0.1902 | 3490 | 11.8107 | - |
| 0.1907 | 3500 | 11.3597 | - |
| 0.1913 | 3510 | 11.7475 | - |
| 0.1918 | 3520 | 11.5556 | - |
| 0.1924 | 3530 | 11.5755 | - |
| 0.1929 | 3540 | 11.6012 | - |
| 0.1934 | 3550 | 11.505 | - |
| 0.1940 | 3560 | 11.5187 | - |
| 0.1945 | 3570 | 11.4475 | - |
| 0.1951 | 3580 | 11.6305 | - |
| 0.1956 | 3590 | 11.5682 | - |
| 0.1962 | 3600 | 11.5038 | - |
| 0.1967 | 3610 | 11.5091 | - |
| 0.1973 | 3620 | 11.193 | - |
| 0.1978 | 3630 | 11.0165 | - |
| 0.1984 | 3640 | 11.384 | - |
| 0.1989 | 3650 | 11.1523 | - |
| 0.1994 | 3660 | 11.2871 | - |
| 0.2000 | 3670 | 11.4431 | - |
| 0.2005 | 3680 | 11.0969 | - |
| 0.2011 | 3690 | 11.245 | - |
| 0.2016 | 3700 | 11.3113 | - |
| 0.2022 | 3710 | 11.1758 | - |
| 0.2027 | 3720 | 11.2785 | - |
| 0.2033 | 3730 | 11.2426 | - |
| 0.2038 | 3740 | 10.9701 | - |
| 0.2043 | 3750 | 11.1651 | - |
| 0.2049 | 3760 | 10.9817 | - |
| 0.2054 | 3770 | 10.9238 | - |
| 0.2060 | 3780 | 11.0338 | - |
| 0.2065 | 3790 | 10.9678 | - |
| 0.2071 | 3800 | 11.032 | - |
| 0.2076 | 3810 | 10.8826 | - |
| 0.2082 | 3820 | 10.8979 | - |
| 0.2087 | 3830 | 10.8586 | - |
| 0.2093 | 3840 | 11.1999 | - |
| 0.2098 | 3850 | 10.9982 | - |
| 0.2103 | 3860 | 10.9347 | - |
| 0.2109 | 3870 | 10.7645 | - |
| 0.2114 | 3880 | 10.5737 | - |
| 0.2120 | 3890 | 11.0763 | - |
| 0.2125 | 3900 | 10.8267 | - |
| 0.2131 | 3910 | 10.6464 | - |
| 0.2136 | 3920 | 10.9326 | - |
| 0.2142 | 3930 | 10.6806 | - |
| 0.2147 | 3940 | 10.7643 | - |
| 0.2152 | 3950 | 10.7525 | - |
| 0.2158 | 3960 | 10.6173 | - |
| 0.2163 | 3970 | 10.6687 | - |
| 0.2169 | 3980 | 10.6221 | - |
| 0.2174 | 3990 | 10.7334 | - |
| 0.2180 | 4000 | 10.587 | 0.4663 |
| 0.2185 | 4010 | 10.4667 | - |
| 0.2191 | 4020 | 10.4951 | - |
| 0.2196 | 4030 | 10.3475 | - |
| 0.2202 | 4040 | 10.5452 | - |
| 0.2207 | 4050 | 10.4219 | - |
| 0.2212 | 4060 | 10.5699 | - |
| 0.2218 | 4070 | 10.4521 | - |
| 0.2223 | 4080 | 10.25 | - |
| 0.2229 | 4090 | 10.5196 | - |
| 0.2234 | 4100 | 10.5692 | - |
| 0.2240 | 4110 | 10.5547 | - |
| 0.2245 | 4120 | 10.4704 | - |
| 0.2251 | 4130 | 10.4747 | - |
| 0.2256 | 4140 | 10.3259 | - |
| 0.2261 | 4150 | 10.1793 | - |
| 0.2267 | 4160 | 10.0264 | - |
| 0.2272 | 4170 | 10.2229 | - |
| 0.2278 | 4180 | 10.2787 | - |
| 0.2283 | 4190 | 10.0919 | - |
| 0.2289 | 4200 | 10.3481 | - |
| 0.2294 | 4210 | 10.1026 | - |
| 0.2300 | 4220 | 10.1955 | - |
| 0.2305 | 4230 | 10.2242 | - |
| 0.2311 | 4240 | 10.1919 | - |
| 0.2316 | 4250 | 10.0783 | - |
| 0.2321 | 4260 | 10.0091 | - |
| 0.2327 | 4270 | 10.038 | - |
| 0.2332 | 4280 | 9.9944 | - |
| 0.2338 | 4290 | 10.1389 | - |
| 0.2343 | 4300 | 9.8452 | - |
| 0.2349 | 4310 | 10.2438 | - |
| 0.2354 | 4320 | 10.0797 | - |
| 0.2360 | 4330 | 9.8426 | - |
| 0.2365 | 4340 | 9.8026 | - |
| 0.2370 | 4350 | 10.0981 | - |
| 0.2376 | 4360 | 9.8736 | - |
| 0.2381 | 4370 | 10.0125 | - |
| 0.2387 | 4380 | 10.1819 | - |
| 0.2392 | 4390 | 10.0428 | - |
| 0.2398 | 4400 | 9.8783 | - |
| 0.2403 | 4410 | 9.9905 | - |
| 0.2409 | 4420 | 9.9323 | - |
| 0.2414 | 4430 | 9.8484 | - |
| 0.2419 | 4440 | 9.9143 | - |
| 0.2425 | 4450 | 9.9063 | - |
| 0.2430 | 4460 | 9.7743 | - |
| 0.2436 | 4470 | 9.9067 | - |
| 0.2441 | 4480 | 9.5666 | - |
| 0.2447 | 4490 | 9.8591 | - |
| 0.2452 | 4500 | 9.8433 | - |
| 0.2458 | 4510 | 9.8242 | - |
| 0.2463 | 4520 | 9.7202 | - |
| 0.2469 | 4530 | 9.9021 | - |
| 0.2474 | 4540 | 9.5058 | - |
| 0.2479 | 4550 | 9.4893 | - |
| 0.2485 | 4560 | 9.5828 | - |
| 0.2490 | 4570 | 9.5272 | - |
| 0.2496 | 4580 | 9.6918 | - |
| 0.2501 | 4590 | 9.6636 | - |
| 0.2507 | 4600 | 9.5673 | - |
| 0.2512 | 4610 | 9.5775 | - |
| 0.2518 | 4620 | 9.6096 | - |
| 0.2523 | 4630 | 9.5462 | - |
| 0.2528 | 4640 | 9.4097 | - |
| 0.2534 | 4650 | 9.5601 | - |
| 0.2539 | 4660 | 9.3779 | - |
| 0.2545 | 4670 | 9.7323 | - |
| 0.2550 | 4680 | 9.4321 | - |
| 0.2556 | 4690 | 9.4253 | - |
| 0.2561 | 4700 | 9.5004 | - |
| 0.2567 | 4710 | 9.2908 | - |
| 0.2572 | 4720 | 9.6241 | - |
| 0.2578 | 4730 | 9.5288 | - |
| 0.2583 | 4740 | 9.5148 | - |
| 0.2588 | 4750 | 9.4222 | - |
| 0.2594 | 4760 | 9.364 | - |
| 0.2599 | 4770 | 9.6274 | - |
| 0.2605 | 4780 | 9.3874 | - |
| 0.2610 | 4790 | 9.2975 | - |
| 0.2616 | 4800 | 9.5574 | - |
| 0.2621 | 4810 | 9.3932 | - |
| 0.2627 | 4820 | 9.2 | - |
| 0.2632 | 4830 | 9.2462 | - |
| 0.2637 | 4840 | 9.3827 | - |
| 0.2643 | 4850 | 9.3065 | - |
| 0.2648 | 4860 | 9.297 | - |
| 0.2654 | 4870 | 9.3514 | - |
| 0.2659 | 4880 | 9.1089 | - |
| 0.2665 | 4890 | 9.2683 | - |
| 0.2670 | 4900 | 9.1573 | - |
| 0.2676 | 4910 | 9.3698 | - |
| 0.2681 | 4920 | 9.107 | - |
| 0.2687 | 4930 | 9.161 | - |
| 0.2692 | 4940 | 9.3085 | - |
| 0.2697 | 4950 | 9.2082 | - |
| 0.2703 | 4960 | 9.2263 | - |
| 0.2708 | 4970 | 9.2001 | - |
| 0.2714 | 4980 | 9.1139 | - |
| 0.2719 | 4990 | 9.0359 | - |
| 0.2725 | 5000 | 9.1433 | 0.5189 |
| 0.2730 | 5010 | 9.0093 | - |
| 0.2736 | 5020 | 8.8841 | - |
| 0.2741 | 5030 | 9.2819 | - |
| 0.2746 | 5040 | 8.9757 | - |
| 0.2752 | 5050 | 8.8423 | - |
| 0.2757 | 5060 | 8.8019 | - |
| 0.2763 | 5070 | 9.1965 | - |
| 0.2768 | 5080 | 8.8017 | - |
| 0.2774 | 5090 | 9.0463 | - |
| 0.2779 | 5100 | 9.0535 | - |
| 0.2785 | 5110 | 8.9905 | - |
| 0.2790 | 5120 | 9.0294 | - |
| 0.2795 | 5130 | 8.8825 | - |
| 0.2801 | 5140 | 8.7088 | - |
| 0.2806 | 5150 | 9.0592 | - |
| 0.2812 | 5160 | 8.9496 | - |
| 0.2817 | 5170 | 8.8741 | - |
| 0.2823 | 5180 | 8.8794 | - |
| 0.2828 | 5190 | 8.7597 | - |
| 0.2834 | 5200 | 9.0287 | - |
| 0.2839 | 5210 | 9.1026 | - |
| 0.2845 | 5220 | 8.75 | - |
| 0.2850 | 5230 | 8.6854 | - |
| 0.2855 | 5240 | 8.9228 | - |
| 0.2861 | 5250 | 8.8268 | - |
| 0.2866 | 5260 | 8.9277 | - |
| 0.2872 | 5270 | 8.8219 | - |
| 0.2877 | 5280 | 9.0601 | - |
| 0.2883 | 5290 | 8.7254 | - |
| 0.2888 | 5300 | 8.6276 | - |
| 0.2894 | 5310 | 8.7474 | - |
| 0.2899 | 5320 | 8.7243 | - |
| 0.2904 | 5330 | 8.8803 | - |
| 0.2910 | 5340 | 8.9843 | - |
| 0.2915 | 5350 | 8.5482 | - |
| 0.2921 | 5360 | 8.8258 | - |
| 0.2926 | 5370 | 9.1142 | - |
| 0.2932 | 5380 | 8.8472 | - |
| 0.2937 | 5390 | 8.581 | - |
| 0.2943 | 5400 | 8.5982 | - |
| 0.2948 | 5410 | 8.6305 | - |
| 0.2954 | 5420 | 8.5684 | - |
| 0.2959 | 5430 | 8.5423 | - |
| 0.2964 | 5440 | 8.6768 | - |
| 0.2970 | 5450 | 8.6106 | - |
| 0.2975 | 5460 | 8.5047 | - |
| 0.2981 | 5470 | 8.587 | - |
| 0.2986 | 5480 | 8.631 | - |
| 0.2992 | 5490 | 8.8151 | - |
| 0.2997 | 5500 | 8.4413 | - |
| 0.3003 | 5510 | 8.7409 | - |
| 0.3008 | 5520 | 8.5096 | - |
| 0.3013 | 5530 | 8.7142 | - |
| 0.3019 | 5540 | 8.4151 | - |
| 0.3024 | 5550 | 8.6947 | - |
| 0.3030 | 5560 | 8.5041 | - |
| 0.3035 | 5570 | 8.899 | - |
| 0.3041 | 5580 | 8.532 | - |
| 0.3046 | 5590 | 8.6056 | - |
| 0.3052 | 5600 | 8.4354 | - |
| 0.3057 | 5610 | 8.5718 | - |
| 0.3063 | 5620 | 8.6022 | - |
| 0.3068 | 5630 | 8.4614 | - |
| 0.3073 | 5640 | 8.2581 | - |
| 0.3079 | 5650 | 8.3234 | - |
| 0.3084 | 5660 | 8.5661 | - |
| 0.3090 | 5670 | 8.4067 | - |
| 0.3095 | 5680 | 8.3092 | - |
| 0.3101 | 5690 | 8.2656 | - |
| 0.3106 | 5700 | 8.4569 | - |
| 0.3112 | 5710 | 8.3815 | - |
| 0.3117 | 5720 | 8.4306 | - |
| 0.3122 | 5730 | 8.3839 | - |
| 0.3128 | 5740 | 8.4978 | - |
| 0.3133 | 5750 | 8.4048 | - |
| 0.3139 | 5760 | 8.2521 | - |
| 0.3144 | 5770 | 8.3647 | - |
| 0.3150 | 5780 | 8.3846 | - |
| 0.3155 | 5790 | 8.5868 | - |
| 0.3161 | 5800 | 8.4786 | - |
| 0.3166 | 5810 | 7.9504 | - |
| 0.3171 | 5820 | 8.2553 | - |
| 0.3177 | 5830 | 8.4174 | - |
| 0.3182 | 5840 | 8.0255 | - |
| 0.3188 | 5850 | 8.168 | - |
| 0.3193 | 5860 | 8.2216 | - |
| 0.3199 | 5870 | 8.4475 | - |
| 0.3204 | 5880 | 8.2418 | - |
| 0.3210 | 5890 | 8.0329 | - |
| 0.3215 | 5900 | 8.3428 | - |
| 0.3221 | 5910 | 8.2779 | - |
| 0.3226 | 5920 | 8.4937 | - |
| 0.3231 | 5930 | 8.2414 | - |
| 0.3237 | 5940 | 8.2713 | - |
| 0.3242 | 5950 | 8.0578 | - |
| 0.3248 | 5960 | 8.169 | - |
| 0.3253 | 5970 | 8.1925 | - |
| 0.3259 | 5980 | 7.9748 | - |
| 0.3264 | 5990 | 8.0145 | - |
| 0.3270 | 6000 | 8.0637 | 0.5545 |
| 0.3275 | 6010 | 8.1314 | - |
| 0.3280 | 6020 | 8.1375 | - |
| 0.3286 | 6030 | 8.0222 | - |
| 0.3291 | 6040 | 8.1187 | - |
| 0.3297 | 6050 | 8.2737 | - |
| 0.3302 | 6060 | 8.073 | - |
| 0.3308 | 6070 | 8.1576 | - |
| 0.3313 | 6080 | 8.0771 | - |
| 0.3319 | 6090 | 8.0022 | - |
| 0.3324 | 6100 | 8.0468 | - |
| 0.3330 | 6110 | 8.0022 | - |
| 0.3335 | 6120 | 8.1386 | - |
| 0.3340 | 6130 | 8.0315 | - |
| 0.3346 | 6140 | 7.8186 | - |
| 0.3351 | 6150 | 8.0148 | - |
| 0.3357 | 6160 | 7.9355 | - |
| 0.3362 | 6170 | 8.0374 | - |
| 0.3368 | 6180 | 8.21 | - |
| 0.3373 | 6190 | 7.9549 | - |
| 0.3379 | 6200 | 8.0193 | - |
| 0.3384 | 6210 | 8.0635 | - |
| 0.3389 | 6220 | 7.8779 | - |
| 0.3395 | 6230 | 8.1862 | - |
| 0.3400 | 6240 | 7.9833 | - |
| 0.3406 | 6250 | 7.9432 | - |
| 0.3411 | 6260 | 8.2462 | - |
| 0.3417 | 6270 | 8.0684 | - |
| 0.3422 | 6280 | 7.7683 | - |
| 0.3428 | 6290 | 7.8694 | - |
| 0.3433 | 6300 | 7.868 | - |
| 0.3439 | 6310 | 7.8491 | - |
| 0.3444 | 6320 | 8.0289 | - |
| 0.3449 | 6330 | 7.9298 | - |
| 0.3455 | 6340 | 7.8687 | - |
| 0.3460 | 6350 | 7.873 | - |
| 0.3466 | 6360 | 7.8725 | - |
| 0.3471 | 6370 | 7.8868 | - |
| 0.3477 | 6380 | 8.093 | - |
| 0.3482 | 6390 | 7.7256 | - |
| 0.3488 | 6400 | 7.852 | - |
| 0.3493 | 6410 | 7.9471 | - |
| 0.3498 | 6420 | 7.8747 | - |
| 0.3504 | 6430 | 7.7718 | - |
| 0.3509 | 6440 | 7.8169 | - |
| 0.3515 | 6450 | 7.9372 | - |
| 0.3520 | 6460 | 7.8373 | - |
| 0.3526 | 6470 | 7.8499 | - |
| 0.3531 | 6480 | 8.12 | - |
| 0.3537 | 6490 | 7.9549 | - |
| 0.3542 | 6500 | 7.8763 | - |
| 0.3547 | 6510 | 7.9602 | - |
| 0.3553 | 6520 | 7.8786 | - |
| 0.3558 | 6530 | 7.9177 | - |
| 0.3564 | 6540 | 7.8027 | - |
| 0.3569 | 6550 | 7.8392 | - |
| 0.3575 | 6560 | 7.7276 | - |
| 0.3580 | 6570 | 7.834 | - |
| 0.3586 | 6580 | 7.8245 | - |
| 0.3591 | 6590 | 7.8485 | - |
| 0.3597 | 6600 | 7.7898 | - |
| 0.3602 | 6610 | 7.6126 | - |
| 0.3607 | 6620 | 8.0074 | - |
| 0.3613 | 6630 | 7.9823 | - |
| 0.3618 | 6640 | 7.8742 | - |
| 0.3624 | 6650 | 7.9112 | - |
| 0.3629 | 6660 | 7.7921 | - |
| 0.3635 | 6670 | 8.12 | - |
| 0.3640 | 6680 | 7.7717 | - |
| 0.3646 | 6690 | 7.8116 | - |
| 0.3651 | 6700 | 7.7519 | - |
| 0.3656 | 6710 | 7.8699 | - |
| 0.3662 | 6720 | 7.5581 | - |
| 0.3667 | 6730 | 7.6637 | - |
| 0.3673 | 6740 | 7.5483 | - |
| 0.3678 | 6750 | 7.279 | - |
| 0.3684 | 6760 | 7.6365 | - |
| 0.3689 | 6770 | 7.6407 | - |
| 0.3695 | 6780 | 7.6092 | - |
| 0.3700 | 6790 | 7.4519 | - |
| 0.3706 | 6800 | 7.8574 | - |
| 0.3711 | 6810 | 7.4808 | - |
| 0.3716 | 6820 | 7.4178 | - |
| 0.3722 | 6830 | 7.494 | - |
| 0.3727 | 6840 | 7.3804 | - |
| 0.3733 | 6850 | 7.5295 | - |
| 0.3738 | 6860 | 7.397 | - |
| 0.3744 | 6870 | 7.7016 | - |
| 0.3749 | 6880 | 7.6908 | - |
| 0.3755 | 6890 | 7.6588 | - |
| 0.3760 | 6900 | 7.5758 | - |
| 0.3765 | 6910 | 7.7869 | - |
| 0.3771 | 6920 | 7.8396 | - |
| 0.3776 | 6930 | 7.5773 | - |
| 0.3782 | 6940 | 7.7657 | - |
| 0.3787 | 6950 | 7.6229 | - |
| 0.3793 | 6960 | 7.4187 | - |
| 0.3798 | 6970 | 7.5334 | - |
| 0.3804 | 6980 | 7.6113 | - |
| 0.3809 | 6990 | 7.5478 | - |
| 0.3815 | 7000 | 7.4247 | 0.5855 |
| 0.3820 | 7010 | 7.6398 | - |
| 0.3825 | 7020 | 7.6764 | - |
| 0.3831 | 7030 | 7.3971 | - |
| 0.3836 | 7040 | 7.3966 | - |
| 0.3842 | 7050 | 7.7726 | - |
| 0.3847 | 7060 | 7.3924 | - |
| 0.3853 | 7070 | 7.5626 | - |
| 0.3858 | 7080 | 7.2671 | - |
| 0.3864 | 7090 | 7.5703 | - |
| 0.3869 | 7100 | 7.4947 | - |
| 0.3874 | 7110 | 7.6151 | - |
| 0.3880 | 7120 | 7.7078 | - |
| 0.3885 | 7130 | 7.4774 | - |
| 0.3891 | 7140 | 7.5246 | - |
| 0.3896 | 7150 | 7.601 | - |
| 0.3902 | 7160 | 7.3956 | - |
| 0.3907 | 7170 | 7.4564 | - |
| 0.3913 | 7180 | 7.4057 | - |
| 0.3918 | 7190 | 7.517 | - |
| 0.3923 | 7200 | 7.4479 | - |
| 0.3929 | 7210 | 7.1871 | - |
| 0.3934 | 7220 | 7.4242 | - |
| 0.3940 | 7230 | 7.3011 | - |
| 0.3945 | 7240 | 7.3513 | - |
| 0.3951 | 7250 | 7.2973 | - |
| 0.3956 | 7260 | 7.2259 | - |
| 0.3962 | 7270 | 7.5115 | - |
| 0.3967 | 7280 | 7.5955 | - |
| 0.3973 | 7290 | 7.3698 | - |
| 0.3978 | 7300 | 7.3241 | - |
| 0.3983 | 7310 | 7.2994 | - |
| 0.3989 | 7320 | 7.5834 | - |
| 0.3994 | 7330 | 7.3874 | - |
| 0.4000 | 7340 | 7.3561 | - |
| 0.4005 | 7350 | 7.4169 | - |
| 0.4011 | 7360 | 7.3035 | - |
| 0.4016 | 7370 | 7.3103 | - |
| 0.4022 | 7380 | 7.3635 | - |
| 0.4027 | 7390 | 7.3893 | - |
| 0.4032 | 7400 | 7.1862 | - |
| 0.4038 | 7410 | 7.4362 | - |
| 0.4043 | 7420 | 7.4014 | - |
| 0.4049 | 7430 | 7.447 | - |
| 0.4054 | 7440 | 7.3614 | - |
| 0.4060 | 7450 | 7.326 | - |
| 0.4065 | 7460 | 7.3865 | - |
| 0.4071 | 7470 | 7.3123 | - |
| 0.4076 | 7480 | 7.4273 | - |
| 0.4082 | 7490 | 7.2329 | - |
| 0.4087 | 7500 | 7.1662 | - |
| 0.4092 | 7510 | 7.302 | - |
| 0.4098 | 7520 | 7.2201 | - |
| 0.4103 | 7530 | 7.2469 | - |
| 0.4109 | 7540 | 7.0976 | - |
| 0.4114 | 7550 | 6.984 | - |
| 0.4120 | 7560 | 7.2592 | - |
| 0.4125 | 7570 | 7.1174 | - |
| 0.4131 | 7580 | 7.2971 | - |
| 0.4136 | 7590 | 7.2644 | - |
| 0.4141 | 7600 | 7.3936 | - |
| 0.4147 | 7610 | 7.2737 | - |
| 0.4152 | 7620 | 7.0088 | - |
| 0.4158 | 7630 | 7.3368 | - |
| 0.4163 | 7640 | 7.1193 | - |
| 0.4169 | 7650 | 7.2348 | - |
| 0.4174 | 7660 | 7.1396 | - |
| 0.4180 | 7670 | 7.2234 | - |
| 0.4185 | 7680 | 7.0031 | - |
| 0.4191 | 7690 | 7.1566 | - |
| 0.4196 | 7700 | 7.4112 | - |
| 0.4201 | 7710 | 7.3474 | - |
| 0.4207 | 7720 | 6.8691 | - |
| 0.4212 | 7730 | 7.4187 | - |
| 0.4218 | 7740 | 7.1173 | - |
| 0.4223 | 7750 | 7.2897 | - |
| 0.4229 | 7760 | 7.2176 | - |
| 0.4234 | 7770 | 7.2184 | - |
| 0.4240 | 7780 | 7.1082 | - |
| 0.4245 | 7790 | 7.2306 | - |
| 0.4250 | 7800 | 7.1772 | - |
| 0.4256 | 7810 | 7.2459 | - |
| 0.4261 | 7820 | 7.1515 | - |
| 0.4267 | 7830 | 7.4251 | - |
| 0.4272 | 7840 | 7.2604 | - |
| 0.4278 | 7850 | 7.0554 | - |
| 0.4283 | 7860 | 7.0932 | - |
| 0.4289 | 7870 | 7.2755 | - |
| 0.4294 | 7880 | 7.157 | - |
| 0.4299 | 7890 | 7.2364 | - |
| 0.4305 | 7900 | 7.0964 | - |
| 0.4310 | 7910 | 7.1475 | - |
| 0.4316 | 7920 | 7.0325 | - |
| 0.4321 | 7930 | 6.9628 | - |
| 0.4327 | 7940 | 7.1708 | - |
| 0.4332 | 7950 | 7.0569 | - |
| 0.4338 | 7960 | 7.026 | - |
| 0.4343 | 7970 | 7.4858 | - |
| 0.4349 | 7980 | 7.185 | - |
| 0.4354 | 7990 | 7.0794 | - |
| 0.4359 | 8000 | 7.1019 | 0.6150 |
| 0.4365 | 8010 | 7.2148 | - |
| 0.4370 | 8020 | 7.2053 | - |
| 0.4376 | 8030 | 6.9946 | - |
| 0.4381 | 8040 | 7.048 | - |
| 0.4387 | 8050 | 7.0108 | - |
| 0.4392 | 8060 | 6.8753 | - |
| 0.4398 | 8070 | 7.0634 | - |
| 0.4403 | 8080 | 7.0431 | - |
| 0.4408 | 8090 | 6.9478 | - |
| 0.4414 | 8100 | 7.0392 | - |
| 0.4419 | 8110 | 7.0292 | - |
| 0.4425 | 8120 | 7.0033 | - |
| 0.4430 | 8130 | 6.9582 | - |
| 0.4436 | 8140 | 7.2159 | - |
| 0.4441 | 8150 | 7.046 | - |
| 0.4447 | 8160 | 7.2072 | - |
| 0.4452 | 8170 | 6.8591 | - |
| 0.4458 | 8180 | 7.2448 | - |
| 0.4463 | 8190 | 7.0115 | - |
| 0.4468 | 8200 | 6.9839 | - |
| 0.4474 | 8210 | 7.2036 | - |
| 0.4479 | 8220 | 6.868 | - |
| 0.4485 | 8230 | 7.0157 | - |
| 0.4490 | 8240 | 7.3588 | - |
| 0.4496 | 8250 | 6.8814 | - |
| 0.4501 | 8260 | 6.8541 | - |
| 0.4507 | 8270 | 7.1133 | - |
| 0.4512 | 8280 | 6.9994 | - |
| 0.4517 | 8290 | 7.0126 | - |
| 0.4523 | 8300 | 6.7376 | - |
| 0.4528 | 8310 | 6.917 | - |
| 0.4534 | 8320 | 6.8532 | - |
| 0.4539 | 8330 | 6.9572 | - |
| 0.4545 | 8340 | 6.9115 | - |
| 0.4550 | 8350 | 7.0851 | - |
| 0.4556 | 8360 | 7.0494 | - |
| 0.4561 | 8370 | 6.9513 | - |
| 0.4567 | 8380 | 7.1227 | - |
| 0.4572 | 8390 | 6.9692 | - |
| 0.4577 | 8400 | 7.0586 | - |
| 0.4583 | 8410 | 6.8832 | - |
| 0.4588 | 8420 | 6.8156 | - |
| 0.4594 | 8430 | 6.9146 | - |
| 0.4599 | 8440 | 7.0535 | - |
| 0.4605 | 8450 | 6.9683 | - |
| 0.4610 | 8460 | 7.029 | - |
| 0.4616 | 8470 | 6.9019 | - |
| 0.4621 | 8480 | 7.0218 | - |
| 0.4626 | 8490 | 6.8704 | - |
| 0.4632 | 8500 | 6.7047 | - |
| 0.4637 | 8510 | 6.8137 | - |
| 0.4643 | 8520 | 7.0906 | - |
| 0.4648 | 8530 | 6.8498 | - |
| 0.4654 | 8540 | 6.8662 | - |
| 0.4659 | 8550 | 6.6864 | - |
| 0.4665 | 8560 | 7.0185 | - |
| 0.4670 | 8570 | 7.0033 | - |
| 0.4675 | 8580 | 6.7309 | - |
| 0.4681 | 8590 | 6.9582 | - |
| 0.4686 | 8600 | 6.7466 | - |
| 0.4692 | 8610 | 6.6997 | - |
| 0.4697 | 8620 | 6.7347 | - |
| 0.4703 | 8630 | 6.8066 | - |
| 0.4708 | 8640 | 7.0278 | - |
| 0.4714 | 8650 | 6.769 | - |
| 0.4719 | 8660 | 6.9595 | - |
| 0.4725 | 8670 | 6.8223 | - |
| 0.4730 | 8680 | 7.0896 | - |
| 0.4735 | 8690 | 6.9122 | - |
| 0.4741 | 8700 | 7.1398 | - |
| 0.4746 | 8710 | 7.0935 | - |
| 0.4752 | 8720 | 6.7757 | - |
| 0.4757 | 8730 | 6.6423 | - |
| 0.4763 | 8740 | 6.8884 | - |
| 0.4768 | 8750 | 6.9172 | - |
| 0.4774 | 8760 | 6.7943 | - |
| 0.4779 | 8770 | 7.0277 | - |
| 0.4784 | 8780 | 6.7292 | - |
| 0.4790 | 8790 | 6.7469 | - |
| 0.4795 | 8800 | 6.7128 | - |
| 0.4801 | 8810 | 6.8654 | - |
| 0.4806 | 8820 | 6.6928 | - |
| 0.4812 | 8830 | 6.778 | - |
| 0.4817 | 8840 | 6.7372 | - |
| 0.4823 | 8850 | 6.7857 | - |
| 0.4828 | 8860 | 6.9501 | - |
| 0.4834 | 8870 | 6.8405 | - |
| 0.4839 | 8880 | 6.9908 | - |
| 0.4844 | 8890 | 6.8878 | - |
| 0.4850 | 8900 | 6.9091 | - |
| 0.4855 | 8910 | 6.8963 | - |
| 0.4861 | 8920 | 6.9115 | - |
| 0.4866 | 8930 | 6.6794 | - |
| 0.4872 | 8940 | 6.7994 | - |
| 0.4877 | 8950 | 6.7999 | - |
| 0.4883 | 8960 | 6.8499 | - |
| 0.4888 | 8970 | 6.9071 | - |
| 0.4893 | 8980 | 6.8629 | - |
| 0.4899 | 8990 | 6.6508 | - |
| 0.4904 | 9000 | 6.8727 | 0.6288 |
| 0.4910 | 9010 | 6.8989 | - |
| 0.4915 | 9020 | 6.6113 | - |
| 0.4921 | 9030 | 6.6679 | - |
| 0.4926 | 9040 | 6.8878 | - |
| 0.4932 | 9050 | 6.7192 | - |
| 0.4937 | 9060 | 6.8181 | - |
| 0.4943 | 9070 | 6.8434 | - |
| 0.4948 | 9080 | 6.9917 | - |
| 0.4953 | 9090 | 6.8 | - |
| 0.4959 | 9100 | 6.8076 | - |
| 0.4964 | 9110 | 6.6266 | - |
| 0.4970 | 9120 | 6.9803 | - |
| 0.4975 | 9130 | 6.8403 | - |
| 0.4981 | 9140 | 6.7824 | - |
| 0.4986 | 9150 | 6.5306 | - |
| 0.4992 | 9160 | 6.8232 | - |
| 0.4997 | 9170 | 6.7601 | - |
| 0.5002 | 9180 | 6.6541 | - |
| 0.5008 | 9190 | 6.9916 | - |
| 0.5013 | 9200 | 6.575 | - |
| 0.5019 | 9210 | 6.6878 | - |
| 0.5024 | 9220 | 7.1536 | - |
| 0.5030 | 9230 | 6.761 | - |
| 0.5035 | 9240 | 6.6037 | - |
| 0.5041 | 9250 | 6.7576 | - |
| 0.5046 | 9260 | 6.5752 | - |
| 0.5051 | 9270 | 6.7783 | - |
| 0.5057 | 9280 | 6.8931 | - |
| 0.5062 | 9290 | 6.5774 | - |
| 0.5068 | 9300 | 6.834 | - |
| 0.5073 | 9310 | 6.7487 | - |
| 0.5079 | 9320 | 6.5814 | - |
| 0.5084 | 9330 | 6.7004 | - |
| 0.5090 | 9340 | 6.5135 | - |
| 0.5095 | 9350 | 6.5975 | - |
| 0.5101 | 9360 | 6.7673 | - |
| 0.5106 | 9370 | 6.4345 | - |
| 0.5111 | 9380 | 6.6254 | - |
| 0.5117 | 9390 | 6.5485 | - |
| 0.5122 | 9400 | 6.7307 | - |
| 0.5128 | 9410 | 6.7256 | - |
| 0.5133 | 9420 | 6.5529 | - |
| 0.5139 | 9430 | 6.5232 | - |
| 0.5144 | 9440 | 6.7098 | - |
| 0.5150 | 9450 | 6.6668 | - |
| 0.5155 | 9460 | 6.9211 | - |
| 0.5160 | 9470 | 6.7553 | - |
| 0.5166 | 9480 | 6.6039 | - |
| 0.5171 | 9490 | 6.7426 | - |
| 0.5177 | 9500 | 6.582 | - |
| 0.5182 | 9510 | 6.6898 | - |
| 0.5188 | 9520 | 6.5572 | - |
| 0.5193 | 9530 | 6.7734 | - |
| 0.5199 | 9540 | 6.733 | - |
| 0.5204 | 9550 | 6.7048 | - |
| 0.5210 | 9560 | 6.7537 | - |
| 0.5215 | 9570 | 6.7967 | - |
| 0.5220 | 9580 | 6.737 | - |
| 0.5226 | 9590 | 6.5266 | - |
| 0.5231 | 9600 | 6.5491 | - |
| 0.5237 | 9610 | 6.7679 | - |
| 0.5242 | 9620 | 6.6807 | - |
| 0.5248 | 9630 | 6.7138 | - |
| 0.5253 | 9640 | 6.6734 | - |
| 0.5259 | 9650 | 6.6708 | - |
| 0.5264 | 9660 | 6.5739 | - |
| 0.5269 | 9670 | 6.6036 | - |
| 0.5275 | 9680 | 6.6854 | - |
| 0.5280 | 9690 | 6.7118 | - |
| 0.5286 | 9700 | 6.7223 | - |
| 0.5291 | 9710 | 6.6224 | - |
| 0.5297 | 9720 | 6.6776 | - |
| 0.5302 | 9730 | 6.9282 | - |
| 0.5308 | 9740 | 6.4891 | - |
| 0.5313 | 9750 | 6.476 | - |
| 0.5319 | 9760 | 6.4303 | - |
| 0.5324 | 9770 | 6.7114 | - |
| 0.5329 | 9780 | 6.5834 | - |
| 0.5335 | 9790 | 6.5255 | - |
| 0.5340 | 9800 | 6.5797 | - |
| 0.5346 | 9810 | 6.8394 | - |
| 0.5351 | 9820 | 6.502 | - |
| 0.5357 | 9830 | 6.7724 | - |
| 0.5362 | 9840 | 6.4659 | - |
| 0.5368 | 9850 | 6.6357 | - |
| 0.5373 | 9860 | 6.5437 | - |
| 0.5378 | 9870 | 6.4246 | - |
| 0.5384 | 9880 | 6.6036 | - |
| 0.5389 | 9890 | 6.5691 | - |
| 0.5395 | 9900 | 6.526 | - |
| 0.5400 | 9910 | 6.7413 | - |
| 0.5406 | 9920 | 6.7491 | - |
| 0.5411 | 9930 | 6.562 | - |
| 0.5417 | 9940 | 6.7205 | - |
| 0.5422 | 9950 | 6.4153 | - |
| 0.5427 | 9960 | 6.6974 | - |
| 0.5433 | 9970 | 6.4841 | - |
| 0.5438 | 9980 | 6.4783 | - |
| 0.5444 | 9990 | 6.4136 | - |
| 0.5449 | 10000 | 6.6144 | 0.6482 |
| 0.5455 | 10010 | 6.436 | - |
| 0.5460 | 10020 | 6.6021 | - |
| 0.5466 | 10030 | 6.6303 | - |
| 0.5471 | 10040 | 6.5935 | - |
| 0.5477 | 10050 | 6.5152 | - |
| 0.5482 | 10060 | 6.7972 | - |
| 0.5487 | 10070 | 6.6123 | - |
| 0.5493 | 10080 | 6.4738 | - |
| 0.5498 | 10090 | 6.8646 | - |
| 0.5504 | 10100 | 6.31 | - |
| 0.5509 | 10110 | 6.6762 | - |
| 0.5515 | 10120 | 6.5441 | - |
| 0.5520 | 10130 | 6.7936 | - |
| 0.5526 | 10140 | 6.2516 | - |
| 0.5531 | 10150 | 6.538 | - |
| 0.5536 | 10160 | 6.5644 | - |
| 0.5542 | 10170 | 6.6617 | - |
| 0.5547 | 10180 | 6.5593 | - |
| 0.5553 | 10190 | 6.3959 | - |
| 0.5558 | 10200 | 6.4151 | - |
| 0.5564 | 10210 | 6.5655 | - |
| 0.5569 | 10220 | 6.4983 | - |
| 0.5575 | 10230 | 6.5985 | - |
| 0.5580 | 10240 | 6.4145 | - |
| 0.5586 | 10250 | 6.4404 | - |
| 0.5591 | 10260 | 6.466 | - |
| 0.5596 | 10270 | 6.6035 | - |
| 0.5602 | 10280 | 6.4327 | - |
| 0.5607 | 10290 | 6.5489 | - |
| 0.5613 | 10300 | 6.4944 | - |
| 0.5618 | 10310 | 6.4506 | - |
| 0.5624 | 10320 | 6.7145 | - |
| 0.5629 | 10330 | 6.4967 | - |
| 0.5635 | 10340 | 6.5552 | - |
| 0.5640 | 10350 | 6.4417 | - |
| 0.5645 | 10360 | 6.5186 | - |
| 0.5651 | 10370 | 6.4406 | - |
| 0.5656 | 10380 | 6.3066 | - |
| 0.5662 | 10390 | 6.4322 | - |
| 0.5667 | 10400 | 6.408 | - |
| 0.5673 | 10410 | 6.5148 | - |
| 0.5678 | 10420 | 6.3354 | - |
| 0.5684 | 10430 | 6.368 | - |
| 0.5689 | 10440 | 6.5477 | - |
| 0.5695 | 10450 | 6.3519 | - |
| 0.5700 | 10460 | 6.428 | - |
| 0.5705 | 10470 | 6.3813 | - |
| 0.5711 | 10480 | 6.569 | - |
| 0.5716 | 10490 | 6.2575 | - |
| 0.5722 | 10500 | 6.6243 | - |
| 0.5727 | 10510 | 6.4724 | - |
| 0.5733 | 10520 | 6.5045 | - |
| 0.5738 | 10530 | 6.5924 | - |
| 0.5744 | 10540 | 6.3322 | - |
| 0.5749 | 10550 | 6.8257 | - |
| 0.5754 | 10560 | 6.3839 | - |
| 0.5760 | 10570 | 6.603 | - |
| 0.5765 | 10580 | 6.1732 | - |
| 0.5771 | 10590 | 6.5801 | - |
| 0.5776 | 10600 | 6.5943 | - |
| 0.5782 | 10610 | 6.5283 | - |
| 0.5787 | 10620 | 6.3671 | - |
| 0.5793 | 10630 | 6.225 | - |
| 0.5798 | 10640 | 6.3644 | - |
| 0.5803 | 10650 | 6.3077 | - |
| 0.5809 | 10660 | 6.4649 | - |
| 0.5814 | 10670 | 6.5151 | - |
| 0.5820 | 10680 | 6.3601 | - |
| 0.5825 | 10690 | 6.4399 | - |
| 0.5831 | 10700 | 6.244 | - |
| 0.5836 | 10710 | 6.355 | - |
| 0.5842 | 10720 | 6.6028 | - |
| 0.5847 | 10730 | 6.4558 | - |
| 0.5853 | 10740 | 6.5265 | - |
| 0.5858 | 10750 | 6.6766 | - |
| 0.5863 | 10760 | 6.5447 | - |
| 0.5869 | 10770 | 6.2333 | - |
| 0.5874 | 10780 | 6.6056 | - |
| 0.5880 | 10790 | 6.5741 | - |
| 0.5885 | 10800 | 6.4497 | - |
| 0.5891 | 10810 | 6.4338 | - |
| 0.5896 | 10820 | 6.6348 | - |
| 0.5902 | 10830 | 6.1167 | - |
| 0.5907 | 10840 | 6.3471 | - |
| 0.5912 | 10850 | 6.3992 | - |
| 0.5918 | 10860 | 6.5264 | - |
| 0.5923 | 10870 | 6.312 | - |
| 0.5929 | 10880 | 6.3594 | - |
| 0.5934 | 10890 | 6.5949 | - |
| 0.5940 | 10900 | 6.354 | - |
| 0.5945 | 10910 | 6.3175 | - |
| 0.5951 | 10920 | 6.5246 | - |
| 0.5956 | 10930 | 6.3608 | - |
| 0.5962 | 10940 | 6.3516 | - |
| 0.5967 | 10950 | 6.5249 | - |
| 0.5972 | 10960 | 6.3785 | - |
| 0.5978 | 10970 | 6.3846 | - |
| 0.5983 | 10980 | 6.3188 | - |
| 0.5989 | 10990 | 6.5969 | - |
| 0.5994 | 11000 | 6.3704 | 0.6548 |
| 0.6000 | 11010 | 6.5205 | - |
| 0.6005 | 11020 | 6.1514 | - |
| 0.6011 | 11030 | 6.3779 | - |
| 0.6016 | 11040 | 6.3729 | - |
| 0.6021 | 11050 | 6.4551 | - |
| 0.6027 | 11060 | 6.5965 | - |
| 0.6032 | 11070 | 6.6712 | - |
| 0.6038 | 11080 | 6.4909 | - |
| 0.6043 | 11090 | 6.6071 | - |
| 0.6049 | 11100 | 6.3863 | - |
| 0.6054 | 11110 | 6.3169 | - |
| 0.6060 | 11120 | 6.4156 | - |
| 0.6065 | 11130 | 6.4848 | - |
| 0.6071 | 11140 | 6.486 | - |
| 0.6076 | 11150 | 6.4581 | - |
| 0.6081 | 11160 | 6.1455 | - |
| 0.6087 | 11170 | 6.5206 | - |
| 0.6092 | 11180 | 6.2987 | - |
| 0.6098 | 11190 | 6.4783 | - |
| 0.6103 | 11200 | 6.5321 | - |
| 0.6109 | 11210 | 6.4885 | - |
| 0.6114 | 11220 | 6.7327 | - |
| 0.6120 | 11230 | 6.4928 | - |
| 0.6125 | 11240 | 6.4383 | - |
| 0.6130 | 11250 | 6.4377 | - |
| 0.6136 | 11260 | 6.252 | - |
| 0.6141 | 11270 | 6.4382 | - |
| 0.6147 | 11280 | 6.4911 | - |
| 0.6152 | 11290 | 6.5029 | - |
| 0.6158 | 11300 | 6.4118 | - |
| 0.6163 | 11310 | 6.4459 | - |
| 0.6169 | 11320 | 6.278 | - |
| 0.6174 | 11330 | 6.2352 | - |
| 0.6179 | 11340 | 6.152 | - |
| 0.6185 | 11350 | 6.2943 | - |
| 0.6190 | 11360 | 6.5046 | - |
| 0.6196 | 11370 | 6.6336 | - |
| 0.6201 | 11380 | 6.0481 | - |
| 0.6207 | 11390 | 6.3265 | - |
| 0.6212 | 11400 | 6.2524 | - |
| 0.6218 | 11410 | 6.5976 | - |
| 0.6223 | 11420 | 6.2147 | - |
| 0.6229 | 11430 | 6.421 | - |
| 0.6234 | 11440 | 6.3618 | - |
| 0.6239 | 11450 | 6.3011 | - |
| 0.6245 | 11460 | 6.5946 | - |
| 0.6250 | 11470 | 6.3556 | - |
| 0.6256 | 11480 | 6.6163 | - |
| 0.6261 | 11490 | 6.2441 | - |
| 0.6267 | 11500 | 6.3202 | - |
| 0.6272 | 11510 | 6.3176 | - |
| 0.6278 | 11520 | 6.2653 | - |
| 0.6283 | 11530 | 6.2858 | - |
| 0.6288 | 11540 | 6.1222 | - |
| 0.6294 | 11550 | 6.3453 | - |
| 0.6299 | 11560 | 6.2064 | - |
| 0.6305 | 11570 | 6.2128 | - |
| 0.6310 | 11580 | 6.2113 | - |
| 0.6316 | 11590 | 6.411 | - |
| 0.6321 | 11600 | 6.4686 | - |
| 0.6327 | 11610 | 6.0459 | - |
| 0.6332 | 11620 | 6.3097 | - |
| 0.6338 | 11630 | 6.4254 | - |
| 0.6343 | 11640 | 6.2177 | - |
| 0.6348 | 11650 | 6.3141 | - |
| 0.6354 | 11660 | 6.4984 | - |
| 0.6359 | 11670 | 6.2343 | - |
| 0.6365 | 11680 | 6.3383 | - |
| 0.6370 | 11690 | 6.3003 | - |
| 0.6376 | 11700 | 6.4467 | - |
| 0.6381 | 11710 | 6.2376 | - |
| 0.6387 | 11720 | 6.308 | - |
| 0.6392 | 11730 | 6.4671 | - |
| 0.6397 | 11740 | 6.1625 | - |
| 0.6403 | 11750 | 6.1407 | - |
| 0.6408 | 11760 | 6.3302 | - |
| 0.6414 | 11770 | 6.2732 | - |
| 0.6419 | 11780 | 5.9517 | - |
| 0.6425 | 11790 | 6.4423 | - |
| 0.6430 | 11800 | 6.1186 | - |
| 0.6436 | 11810 | 6.0881 | - |
| 0.6441 | 11820 | 6.3203 | - |
| 0.6447 | 11830 | 6.2081 | - |
| 0.6452 | 11840 | 6.1323 | - |
| 0.6457 | 11850 | 6.0204 | - |
| 0.6463 | 11860 | 6.5887 | - |
| 0.6468 | 11870 | 6.2899 | - |
| 0.6474 | 11880 | 6.3255 | - |
| 0.6479 | 11890 | 6.4173 | - |
| 0.6485 | 11900 | 6.4165 | - |
| 0.6490 | 11910 | 6.2384 | - |
| 0.6496 | 11920 | 6.4318 | - |
| 0.6501 | 11930 | 6.2372 | - |
| 0.6506 | 11940 | 6.1017 | - |
| 0.6512 | 11950 | 6.4452 | - |
| 0.6517 | 11960 | 6.4424 | - |
| 0.6523 | 11970 | 6.1744 | - |
| 0.6528 | 11980 | 6.2602 | - |
| 0.6534 | 11990 | 6.2495 | - |
| 0.6539 | 12000 | 6.1693 | 0.6625 |
| 0.6545 | 12010 | 6.1397 | - |
| 0.6550 | 12020 | 6.2595 | - |
| 0.6556 | 12030 | 6.3883 | - |
| 0.6561 | 12040 | 6.3214 | - |
| 0.6566 | 12050 | 6.0738 | - |
| 0.6572 | 12060 | 6.5109 | - |
| 0.6577 | 12070 | 6.3116 | - |
| 0.6583 | 12080 | 6.2826 | - |
| 0.6588 | 12090 | 6.3446 | - |
| 0.6594 | 12100 | 6.3005 | - |
| 0.6599 | 12110 | 6.256 | - |
| 0.6605 | 12120 | 6.2181 | - |
| 0.6610 | 12130 | 6.3286 | - |
| 0.6615 | 12140 | 6.1738 | - |
| 0.6621 | 12150 | 6.6923 | - |
| 0.6626 | 12160 | 6.3391 | - |
| 0.6632 | 12170 | 6.2374 | - |
| 0.6637 | 12180 | 6.3342 | - |
| 0.6643 | 12190 | 6.1921 | - |
| 0.6648 | 12200 | 6.4095 | - |
| 0.6654 | 12210 | 6.2689 | - |
| 0.6659 | 12220 | 6.3106 | - |
| 0.6664 | 12230 | 6.2756 | - |
| 0.6670 | 12240 | 6.2884 | - |
| 0.6675 | 12250 | 6.3213 | - |
| 0.6681 | 12260 | 6.0858 | - |
| 0.6686 | 12270 | 6.2898 | - |
| 0.6692 | 12280 | 6.2696 | - |
| 0.6697 | 12290 | 6.5851 | - |
| 0.6703 | 12300 | 6.4269 | - |
| 0.6708 | 12310 | 6.2373 | - |
| 0.6714 | 12320 | 6.3366 | - |
| 0.6719 | 12330 | 6.1535 | - |
| 0.6724 | 12340 | 6.3414 | - |
| 0.6730 | 12350 | 6.2143 | - |
| 0.6735 | 12360 | 6.2074 | - |
| 0.6741 | 12370 | 6.0688 | - |
| 0.6746 | 12380 | 6.253 | - |
| 0.6752 | 12390 | 6.0802 | - |
| 0.6757 | 12400 | 6.158 | - |
| 0.6763 | 12410 | 6.408 | - |
| 0.6768 | 12420 | 6.0907 | - |
| 0.6773 | 12430 | 6.2232 | - |
| 0.6779 | 12440 | 6.2568 | - |
| 0.6784 | 12450 | 6.4261 | - |
| 0.6790 | 12460 | 6.2082 | - |
| 0.6795 | 12470 | 6.3312 | - |
| 0.6801 | 12480 | 6.1839 | - |
| 0.6806 | 12490 | 6.2688 | - |
| 0.6812 | 12500 | 6.352 | - |
| 0.6817 | 12510 | 6.2593 | - |
| 0.6823 | 12520 | 6.1974 | - |
| 0.6828 | 12530 | 6.1949 | - |
| 0.6833 | 12540 | 6.1794 | - |
| 0.6839 | 12550 | 6.3204 | - |
| 0.6844 | 12560 | 6.121 | - |
| 0.6850 | 12570 | 6.4873 | - |
| 0.6855 | 12580 | 6.1829 | - |
| 0.6861 | 12590 | 6.4489 | - |
| 0.6866 | 12600 | 6.2582 | - |
| 0.6872 | 12610 | 6.1829 | - |
| 0.6877 | 12620 | 6.3133 | - |
| 0.6882 | 12630 | 6.0879 | - |
| 0.6888 | 12640 | 6.1153 | - |
| 0.6893 | 12650 | 6.3218 | - |
| 0.6899 | 12660 | 6.1276 | - |
| 0.6904 | 12670 | 6.0637 | - |
| 0.6910 | 12680 | 6.1471 | - |
| 0.6915 | 12690 | 6.302 | - |
| 0.6921 | 12700 | 6.2505 | - |
| 0.6926 | 12710 | 6.1216 | - |
| 0.6932 | 12720 | 6.3172 | - |
| 0.6937 | 12730 | 6.1703 | - |
| 0.6942 | 12740 | 6.0597 | - |
| 0.6948 | 12750 | 6.1939 | - |
| 0.6953 | 12760 | 6.4529 | - |
| 0.6959 | 12770 | 6.3592 | - |
| 0.6964 | 12780 | 6.3093 | - |
| 0.6970 | 12790 | 6.2007 | - |
| 0.6975 | 12800 | 6.2885 | - |
| 0.6981 | 12810 | 6.2933 | - |
| 0.6986 | 12820 | 6.3749 | - |
| 0.6991 | 12830 | 6.246 | - |
| 0.6997 | 12840 | 6.149 | - |
| 0.7002 | 12850 | 6.3327 | - |
| 0.7008 | 12860 | 6.2364 | - |
| 0.7013 | 12870 | 6.0438 | - |
| 0.7019 | 12880 | 6.134 | - |
| 0.7024 | 12890 | 6.3003 | - |
| 0.7030 | 12900 | 6.3018 | - |
| 0.7035 | 12910 | 6.3864 | - |
| 0.7040 | 12920 | 6.1768 | - |
| 0.7046 | 12930 | 6.3982 | - |
| 0.7051 | 12940 | 6.0717 | - |
| 0.7057 | 12950 | 5.926 | - |
| 0.7062 | 12960 | 6.2966 | - |
| 0.7068 | 12970 | 6.2071 | - |
| 0.7073 | 12980 | 6.2254 | - |
| 0.7079 | 12990 | 6.1926 | - |
| 0.7084 | 13000 | 6.3894 | 0.6743 |
| 0.7090 | 13010 | 6.1227 | - |
| 0.7095 | 13020 | 6.5743 | - |
| 0.7100 | 13030 | 6.055 | - |
| 0.7106 | 13040 | 6.2505 | - |
| 0.7111 | 13050 | 6.3362 | - |
| 0.7117 | 13060 | 6.2988 | - |
| 0.7122 | 13070 | 6.0154 | - |
| 0.7128 | 13080 | 6.3128 | - |
| 0.7133 | 13090 | 6.2434 | - |
| 0.7139 | 13100 | 6.1086 | - |
| 0.7144 | 13110 | 6.302 | - |
| 0.7149 | 13120 | 6.2056 | - |
| 0.7155 | 13130 | 6.2682 | - |
| 0.7160 | 13140 | 6.3603 | - |
| 0.7166 | 13150 | 6.0961 | - |
| 0.7171 | 13160 | 6.0995 | - |
| 0.7177 | 13170 | 6.0371 | - |
| 0.7182 | 13180 | 6.0491 | - |
| 0.7188 | 13190 | 6.0995 | - |
| 0.7193 | 13200 | 6.2235 | - |
| 0.7199 | 13210 | 5.9992 | - |
| 0.7204 | 13220 | 6.0355 | - |
| 0.7209 | 13230 | 6.0856 | - |
| 0.7215 | 13240 | 6.2403 | - |
| 0.7220 | 13250 | 6.2837 | - |
| 0.7226 | 13260 | 6.4257 | - |
| 0.7231 | 13270 | 6.3632 | - |
| 0.7237 | 13280 | 6.3639 | - |
| 0.7242 | 13290 | 6.1365 | - |
| 0.7248 | 13300 | 6.0158 | - |
| 0.7253 | 13310 | 6.0127 | - |
| 0.7258 | 13320 | 6.081 | - |
| 0.7264 | 13330 | 6.3075 | - |
| 0.7269 | 13340 | 6.1475 | - |
| 0.7275 | 13350 | 6.1694 | - |
| 0.7280 | 13360 | 5.9961 | - |
| 0.7286 | 13370 | 6.3643 | - |
| 0.7291 | 13380 | 6.2159 | - |
| 0.7297 | 13390 | 6.0662 | - |
| 0.7302 | 13400 | 6.2296 | - |
| 0.7308 | 13410 | 6.1634 | - |
| 0.7313 | 13420 | 6.1762 | - |
| 0.7318 | 13430 | 6.3626 | - |
| 0.7324 | 13440 | 6.281 | - |
| 0.7329 | 13450 | 6.038 | - |
| 0.7335 | 13460 | 6.5049 | - |
| 0.7340 | 13470 | 6.2083 | - |
| 0.7346 | 13480 | 6.1935 | - |
| 0.7351 | 13490 | 6.1256 | - |
| 0.7357 | 13500 | 6.1541 | - |
| 0.7362 | 13510 | 6.0882 | - |
| 0.7367 | 13520 | 6.2236 | - |
| 0.7373 | 13530 | 6.2949 | - |
| 0.7378 | 13540 | 6.3433 | - |
| 0.7384 | 13550 | 6.0872 | - |
| 0.7389 | 13560 | 6.2832 | - |
| 0.7395 | 13570 | 6.0438 | - |
| 0.7400 | 13580 | 6.1059 | - |
| 0.7406 | 13590 | 6.2669 | - |
| 0.7411 | 13600 | 6.1896 | - |
| 0.7416 | 13610 | 6.2554 | - |
| 0.7422 | 13620 | 6.0979 | - |
| 0.7427 | 13630 | 6.1521 | - |
| 0.7433 | 13640 | 6.2451 | - |
| 0.7438 | 13650 | 6.2535 | - |
| 0.7444 | 13660 | 6.4082 | - |
| 0.7449 | 13670 | 6.2058 | - |
| 0.7455 | 13680 | 5.9853 | - |
| 0.7460 | 13690 | 6.0054 | - |
| 0.7466 | 13700 | 5.9116 | - |
| 0.7471 | 13710 | 6.1284 | - |
| 0.7476 | 13720 | 6.2769 | - |
| 0.7482 | 13730 | 6.1093 | - |
| 0.7487 | 13740 | 6.3509 | - |
| 0.7493 | 13750 | 6.2589 | - |
| 0.7498 | 13760 | 6.1261 | - |
| 0.7504 | 13770 | 5.9761 | - |
| 0.7509 | 13780 | 6.1377 | - |
| 0.7515 | 13790 | 6.1639 | - |
| 0.7520 | 13800 | 6.1897 | - |
| 0.7525 | 13810 | 6.1165 | - |
| 0.7531 | 13820 | 6.0729 | - |
| 0.7536 | 13830 | 6.0782 | - |
| 0.7542 | 13840 | 6.3813 | - |
| 0.7547 | 13850 | 6.1898 | - |
| 0.7553 | 13860 | 6.1141 | - |
| 0.7558 | 13870 | 5.9763 | - |
| 0.7564 | 13880 | 6.1387 | - |
| 0.7569 | 13890 | 6.0647 | - |
| 0.7575 | 13900 | 6.0995 | - |
| 0.7580 | 13910 | 6.0857 | - |
| 0.7585 | 13920 | 6.0725 | - |
| 0.7591 | 13930 | 5.8344 | - |
| 0.7596 | 13940 | 6.1597 | - |
| 0.7602 | 13950 | 6.1085 | - |
| 0.7607 | 13960 | 6.0003 | - |
| 0.7613 | 13970 | 5.933 | - |
| 0.7618 | 13980 | 6.1487 | - |
| 0.7624 | 13990 | 6.3134 | - |
| 0.7629 | 14000 | 5.8993 | 0.6709 |
| 0.7634 | 14010 | 6.2152 | - |
| 0.7640 | 14020 | 6.1073 | - |
| 0.7645 | 14030 | 5.8387 | - |
| 0.7651 | 14040 | 5.8673 | - |
| 0.7656 | 14050 | 6.2114 | - |
| 0.7662 | 14060 | 6.1065 | - |
| 0.7667 | 14070 | 6.24 | - |
| 0.7673 | 14080 | 5.691 | - |
| 0.7678 | 14090 | 6.0704 | - |
| 0.7684 | 14100 | 5.9683 | - |
| 0.7689 | 14110 | 5.9526 | - |
| 0.7694 | 14120 | 6.1656 | - |
| 0.7700 | 14130 | 6.1071 | - |
| 0.7705 | 14140 | 5.9906 | - |
| 0.7711 | 14150 | 6.0844 | - |
| 0.7716 | 14160 | 6.0475 | - |
| 0.7722 | 14170 | 6.0391 | - |
| 0.7727 | 14180 | 6.0952 | - |
| 0.7733 | 14190 | 6.3405 | - |
| 0.7738 | 14200 | 6.0467 | - |
| 0.7743 | 14210 | 6.243 | - |
| 0.7749 | 14220 | 6.1109 | - |
| 0.7754 | 14230 | 6.134 | - |
| 0.7760 | 14240 | 6.1817 | - |
| 0.7765 | 14250 | 5.9965 | - |
| 0.7771 | 14260 | 6.1201 | - |
| 0.7776 | 14270 | 5.8983 | - |
| 0.7782 | 14280 | 6.0442 | - |
| 0.7787 | 14290 | 5.9296 | - |
| 0.7792 | 14300 | 5.9939 | - |
| 0.7798 | 14310 | 6.0171 | - |
| 0.7803 | 14320 | 6.1784 | - |
| 0.7809 | 14330 | 6.1785 | - |
| 0.7814 | 14340 | 6.135 | - |
| 0.7820 | 14350 | 6.2456 | - |
| 0.7825 | 14360 | 6.1387 | - |
| 0.7831 | 14370 | 6.2324 | - |
| 0.7836 | 14380 | 6.2713 | - |
| 0.7842 | 14390 | 6.091 | - |
| 0.7847 | 14400 | 6.167 | - |
| 0.7852 | 14410 | 6.125 | - |
| 0.7858 | 14420 | 6.1568 | - |
| 0.7863 | 14430 | 6.1267 | - |
| 0.7869 | 14440 | 5.9984 | - |
| 0.7874 | 14450 | 6.1204 | - |
| 0.7880 | 14460 | 5.9599 | - |
| 0.7885 | 14470 | 6.0471 | - |
| 0.7891 | 14480 | 5.9514 | - |
| 0.7896 | 14490 | 6.1668 | - |
| 0.7901 | 14500 | 6.2129 | - |
| 0.7907 | 14510 | 5.9926 | - |
| 0.7912 | 14520 | 6.018 | - |
| 0.7918 | 14530 | 6.3509 | - |
| 0.7923 | 14540 | 6.0939 | - |
| 0.7929 | 14550 | 6.2043 | - |
| 0.7934 | 14560 | 6.1367 | - |
| 0.7940 | 14570 | 5.6858 | - |
| 0.7945 | 14580 | 5.9162 | - |
| 0.7951 | 14590 | 6.0892 | - |
| 0.7956 | 14600 | 6.064 | - |
| 0.7961 | 14610 | 5.9479 | - |
| 0.7967 | 14620 | 6.2391 | - |
| 0.7972 | 14630 | 5.862 | - |
| 0.7978 | 14640 | 6.295 | - |
| 0.7983 | 14650 | 5.9554 | - |
| 0.7989 | 14660 | 6.0692 | - |
| 0.7994 | 14670 | 5.8952 | - |
| 0.8000 | 14680 | 6.259 | - |
| 0.8005 | 14690 | 6.3728 | - |
| 0.8010 | 14700 | 5.9493 | - |
| 0.8016 | 14710 | 5.9454 | - |
| 0.8021 | 14720 | 6.0195 | - |
| 0.8027 | 14730 | 6.0975 | - |
| 0.8032 | 14740 | 6.2664 | - |
| 0.8038 | 14750 | 5.8184 | - |
| 0.8043 | 14760 | 6.1007 | - |
| 0.8049 | 14770 | 6.0146 | - |
| 0.8054 | 14780 | 6.0923 | - |
| 0.8060 | 14790 | 5.856 | - |
| 0.8065 | 14800 | 6.0679 | - |
| 0.8070 | 14810 | 6.2853 | - |
| 0.8076 | 14820 | 5.9374 | - |
| 0.8081 | 14830 | 6.2117 | - |
| 0.8087 | 14840 | 6.207 | - |
| 0.8092 | 14850 | 6.1526 | - |
| 0.8098 | 14860 | 5.9947 | - |
| 0.8103 | 14870 | 6.0142 | - |
| 0.8109 | 14880 | 6.0428 | - |
| 0.8114 | 14890 | 6.138 | - |
| 0.8119 | 14900 | 6.1008 | - |
| 0.8125 | 14910 | 6.1654 | - |
| 0.8130 | 14920 | 6.087 | - |
| 0.8136 | 14930 | 6.3423 | - |
| 0.8141 | 14940 | 6.2544 | - |
| 0.8147 | 14950 | 6.2641 | - |
| 0.8152 | 14960 | 6.0004 | - |
| 0.8158 | 14970 | 6.3328 | - |
| 0.8163 | 14980 | 6.1373 | - |
| 0.8168 | 14990 | 5.9457 | - |
| 0.8174 | 15000 | 6.0359 | 0.6849 |
| 0.8179 | 15010 | 6.1059 | - |
| 0.8185 | 15020 | 6.0064 | - |
| 0.8190 | 15030 | 5.9367 | - |
| 0.8196 | 15040 | 6.26 | - |
| 0.8201 | 15050 | 5.8852 | - |
| 0.8207 | 15060 | 6.1803 | - |
| 0.8212 | 15070 | 5.9494 | - |
| 0.8218 | 15080 | 6.0288 | - |
| 0.8223 | 15090 | 6.3174 | - |
| 0.8228 | 15100 | 5.8188 | - |
| 0.8234 | 15110 | 6.022 | - |
| 0.8239 | 15120 | 6.0606 | - |
| 0.8245 | 15130 | 6.1626 | - |
| 0.8250 | 15140 | 5.772 | - |
| 0.8256 | 15150 | 5.9765 | - |
| 0.8261 | 15160 | 6.1011 | - |
| 0.8267 | 15170 | 6.3082 | - |
| 0.8272 | 15180 | 6.0106 | - |
| 0.8277 | 15190 | 6.3261 | - |
| 0.8283 | 15200 | 5.875 | - |
| 0.8288 | 15210 | 5.8271 | - |
| 0.8294 | 15220 | 6.466 | - |
| 0.8299 | 15230 | 5.9033 | - |
| 0.8305 | 15240 | 6.169 | - |
| 0.8310 | 15250 | 6.1515 | - |
| 0.8316 | 15260 | 5.9762 | - |
| 0.8321 | 15270 | 6.0413 | - |
| 0.8327 | 15280 | 6.1227 | - |
| 0.8332 | 15290 | 6.2654 | - |
| 0.8337 | 15300 | 6.1052 | - |
| 0.8343 | 15310 | 6.2313 | - |
| 0.8348 | 15320 | 6.0966 | - |
| 0.8354 | 15330 | 5.8208 | - |
| 0.8359 | 15340 | 5.791 | - |
| 0.8365 | 15350 | 6.2534 | - |
| 0.8370 | 15360 | 5.9181 | - |
| 0.8376 | 15370 | 6.1517 | - |
| 0.8381 | 15380 | 6.1119 | - |
| 0.8386 | 15390 | 6.0001 | - |
| 0.8392 | 15400 | 6.1371 | - |
| 0.8397 | 15410 | 6.0099 | - |
| 0.8403 | 15420 | 6.0785 | - |
| 0.8408 | 15430 | 5.9549 | - |
| 0.8414 | 15440 | 5.7056 | - |
| 0.8419 | 15450 | 6.1703 | - |
| 0.8425 | 15460 | 6.2109 | - |
| 0.8430 | 15470 | 5.9234 | - |
| 0.8436 | 15480 | 5.9329 | - |
| 0.8441 | 15490 | 6.0731 | - |
| 0.8446 | 15500 | 5.9042 | - |
| 0.8452 | 15510 | 6.0279 | - |
| 0.8457 | 15520 | 6.1041 | - |
| 0.8463 | 15530 | 6.292 | - |
| 0.8468 | 15540 | 6.0768 | - |
| 0.8474 | 15550 | 6.0182 | - |
| 0.8479 | 15560 | 5.9813 | - |
| 0.8485 | 15570 | 6.1457 | - |
| 0.8490 | 15580 | 5.9047 | - |
| 0.8495 | 15590 | 5.9654 | - |
| 0.8501 | 15600 | 6.1113 | - |
| 0.8506 | 15610 | 6.2618 | - |
| 0.8512 | 15620 | 6.0824 | - |
| 0.8517 | 15630 | 6.1461 | - |
| 0.8523 | 15640 | 5.933 | - |
| 0.8528 | 15650 | 5.8813 | - |
| 0.8534 | 15660 | 5.8569 | - |
| 0.8539 | 15670 | 6.0833 | - |
| 0.8544 | 15680 | 5.758 | - |
| 0.8550 | 15690 | 6.0983 | - |
| 0.8555 | 15700 | 6.0653 | - |
| 0.8561 | 15710 | 5.9869 | - |
| 0.8566 | 15720 | 6.1541 | - |
| 0.8572 | 15730 | 6.16 | - |
| 0.8577 | 15740 | 6.1943 | - |
| 0.8583 | 15750 | 6.0049 | - |
| 0.8588 | 15760 | 5.9264 | - |
| 0.8594 | 15770 | 6.0828 | - |
| 0.8599 | 15780 | 5.9052 | - |
| 0.8604 | 15790 | 5.9191 | - |
| 0.8610 | 15800 | 6.0602 | - |
| 0.8615 | 15810 | 5.9778 | - |
| 0.8621 | 15820 | 5.9461 | - |
| 0.8626 | 15830 | 6.1393 | - |
| 0.8632 | 15840 | 5.6771 | - |
| 0.8637 | 15850 | 5.9614 | - |
| 0.8643 | 15860 | 6.0193 | - |
| 0.8648 | 15870 | 5.9235 | - |
| 0.8653 | 15880 | 6.1359 | - |
| 0.8659 | 15890 | 6.019 | - |
| 0.8664 | 15900 | 6.1176 | - |
| 0.8670 | 15910 | 5.897 | - |
| 0.8675 | 15920 | 5.9411 | - |
| 0.8681 | 15930 | 6.0404 | - |
| 0.8686 | 15940 | 6.1481 | - |
| 0.8692 | 15950 | 5.7859 | - |
| 0.8697 | 15960 | 5.9925 | - |
| 0.8703 | 15970 | 6.0319 | - |
| 0.8708 | 15980 | 6.1486 | - |
| 0.8713 | 15990 | 5.9504 | - |
| 0.8719 | 16000 | 6.0577 | 0.6787 |
| 0.8724 | 16010 | 5.8618 | - |
| 0.8730 | 16020 | 5.8582 | - |
| 0.8735 | 16030 | 6.1119 | - |
| 0.8741 | 16040 | 5.8022 | - |
| 0.8746 | 16050 | 6.0492 | - |
| 0.8752 | 16060 | 5.8845 | - |
| 0.8757 | 16070 | 5.837 | - |
| 0.8762 | 16080 | 6.1271 | - |
| 0.8768 | 16090 | 6.1907 | - |
| 0.8773 | 16100 | 6.0579 | - |
| 0.8779 | 16110 | 5.7405 | - |
| 0.8784 | 16120 | 6.0334 | - |
| 0.8790 | 16130 | 5.9978 | - |
| 0.8795 | 16140 | 6.0883 | - |
| 0.8801 | 16150 | 5.805 | - |
| 0.8806 | 16160 | 5.8888 | - |
| 0.8812 | 16170 | 6.095 | - |
| 0.8817 | 16180 | 6.0584 | - |
| 0.8822 | 16190 | 5.9797 | - |
| 0.8828 | 16200 | 5.9213 | - |
| 0.8833 | 16210 | 6.2204 | - |
| 0.8839 | 16220 | 6.1912 | - |
| 0.8844 | 16230 | 6.0743 | - |
| 0.8850 | 16240 | 5.87 | - |
| 0.8855 | 16250 | 6.051 | - |
| 0.8861 | 16260 | 6.0365 | - |
| 0.8866 | 16270 | 5.9941 | - |
| 0.8871 | 16280 | 6.0548 | - |
| 0.8877 | 16290 | 5.9687 | - |
| 0.8882 | 16300 | 5.8674 | - |
| 0.8888 | 16310 | 5.9445 | - |
| 0.8893 | 16320 | 5.9635 | - |
| 0.8899 | 16330 | 6.0997 | - |
| 0.8904 | 16340 | 5.8811 | - |
| 0.8910 | 16350 | 6.1895 | - |
| 0.8915 | 16360 | 5.9302 | - |
| 0.8920 | 16370 | 5.9723 | - |
| 0.8926 | 16380 | 6.0606 | - |
| 0.8931 | 16390 | 6.1146 | - |
| 0.8937 | 16400 | 6.0392 | - |
| 0.8942 | 16410 | 6.0723 | - |
| 0.8948 | 16420 | 5.9673 | - |
| 0.8953 | 16430 | 5.9086 | - |
| 0.8959 | 16440 | 5.7594 | - |
| 0.8964 | 16450 | 5.9933 | - |
| 0.8970 | 16460 | 6.0917 | - |
| 0.8975 | 16470 | 6.0039 | - |
| 0.8980 | 16480 | 6.1426 | - |
| 0.8986 | 16490 | 5.8798 | - |
| 0.8991 | 16500 | 5.8838 | - |
| 0.8997 | 16510 | 5.8645 | - |
| 0.9002 | 16520 | 6.4033 | - |
| 0.9008 | 16530 | 5.9681 | - |
| 0.9013 | 16540 | 5.8725 | - |
| 0.9019 | 16550 | 6.0205 | - |
| 0.9024 | 16560 | 6.0768 | - |
| 0.9029 | 16570 | 6.0623 | - |
| 0.9035 | 16580 | 5.9255 | - |
| 0.9040 | 16590 | 6.1983 | - |
| 0.9046 | 16600 | 5.8805 | - |
| 0.9051 | 16610 | 5.9868 | - |
| 0.9057 | 16620 | 6.0397 | - |
| 0.9062 | 16630 | 5.9857 | - |
| 0.9068 | 16640 | 6.2815 | - |
| 0.9073 | 16650 | 6.202 | - |
| 0.9079 | 16660 | 5.9818 | - |
| 0.9084 | 16670 | 5.8221 | - |
| 0.9089 | 16680 | 5.9575 | - |
| 0.9095 | 16690 | 6.2472 | - |
| 0.9100 | 16700 | 5.8917 | - |
| 0.9106 | 16710 | 5.8243 | - |
| 0.9111 | 16720 | 5.8284 | - |
| 0.9117 | 16730 | 5.8901 | - |
| 0.9122 | 16740 | 6.0657 | - |
| 0.9128 | 16750 | 5.965 | - |
| 0.9133 | 16760 | 6.1145 | - |
| 0.9138 | 16770 | 5.9792 | - |
| 0.9144 | 16780 | 6.0539 | - |
| 0.9149 | 16790 | 5.9315 | - |
| 0.9155 | 16800 | 6.0809 | - |
| 0.9160 | 16810 | 5.9401 | - |
| 0.9166 | 16820 | 5.959 | - |
| 0.9171 | 16830 | 6.3247 | - |
| 0.9177 | 16840 | 6.1742 | - |
| 0.9182 | 16850 | 6.0082 | - |
| 0.9188 | 16860 | 5.7142 | - |
| 0.9193 | 16870 | 5.9911 | - |
| 0.9198 | 16880 | 6.0264 | - |
| 0.9204 | 16890 | 6.0535 | - |
| 0.9209 | 16900 | 6.0652 | - |
| 0.9215 | 16910 | 5.9019 | - |
| 0.9220 | 16920 | 5.7745 | - |
| 0.9226 | 16930 | 6.0375 | - |
| 0.9231 | 16940 | 6.1007 | - |
| 0.9237 | 16950 | 5.9882 | - |
| 0.9242 | 16960 | 5.9751 | - |
| 0.9247 | 16970 | 5.755 | - |
| 0.9253 | 16980 | 5.8313 | - |
| 0.9258 | 16990 | 5.8091 | - |
| 0.9264 | 17000 | 6.1784 | 0.6766 |
| 0.9269 | 17010 | 6.0994 | - |
| 0.9275 | 17020 | 5.9793 | - |
| 0.9280 | 17030 | 5.8516 | - |
| 0.9286 | 17040 | 5.9635 | - |
| 0.9291 | 17050 | 6.0084 | - |
| 0.9296 | 17060 | 6.005 | - |
| 0.9302 | 17070 | 5.9652 | - |
| 0.9307 | 17080 | 6.0636 | - |
| 0.9313 | 17090 | 6.0471 | - |
| 0.9318 | 17100 | 6.1346 | - |
| 0.9324 | 17110 | 6.1062 | - |
| 0.9329 | 17120 | 5.9275 | - |
| 0.9335 | 17130 | 5.9085 | - |
| 0.9340 | 17140 | 6.1333 | - |
| 0.9346 | 17150 | 5.752 | - |
| 0.9351 | 17160 | 5.7757 | - |
| 0.9356 | 17170 | 6.1109 | - |
| 0.9362 | 17180 | 6.1712 | - |
| 0.9367 | 17190 | 5.9232 | - |
| 0.9373 | 17200 | 5.9342 | - |
| 0.9378 | 17210 | 5.7272 | - |
| 0.9384 | 17220 | 5.8474 | - |
| 0.9389 | 17230 | 5.8788 | - |
| 0.9395 | 17240 | 6.0857 | - |
| 0.9400 | 17250 | 5.8283 | - |
| 0.9405 | 17260 | 5.8828 | - |
| 0.9411 | 17270 | 5.9698 | - |
| 0.9416 | 17280 | 5.9597 | - |
| 0.9422 | 17290 | 5.6236 | - |
| 0.9427 | 17300 | 5.9286 | - |
| 0.9433 | 17310 | 5.7301 | - |
| 0.9438 | 17320 | 5.9615 | - |
| 0.9444 | 17330 | 5.6984 | - |
| 0.9449 | 17340 | 5.9779 | - |
| 0.9455 | 17350 | 5.9033 | - |
| 0.9460 | 17360 | 5.9838 | - |
| 0.9465 | 17370 | 5.9723 | - |
| 0.9471 | 17380 | 5.9848 | - |
| 0.9476 | 17390 | 5.8565 | - |
| 0.9482 | 17400 | 5.8041 | - |
| 0.9487 | 17410 | 5.9665 | - |
| 0.9493 | 17420 | 5.913 | - |
| 0.9498 | 17430 | 6.1162 | - |
| 0.9504 | 17440 | 5.8297 | - |
| 0.9509 | 17450 | 5.8355 | - |
| 0.9514 | 17460 | 5.8872 | - |
| 0.9520 | 17470 | 5.9385 | - |
| 0.9525 | 17480 | 5.9437 | - |
| 0.9531 | 17490 | 6.1078 | - |
| 0.9536 | 17500 | 5.9857 | - |
| 0.9542 | 17510 | 5.9939 | - |
| 0.9547 | 17520 | 6.1567 | - |
| 0.9553 | 17530 | 5.9716 | - |
| 0.9558 | 17540 | 6.1307 | - |
| 0.9564 | 17550 | 6.0449 | - |
| 0.9569 | 17560 | 5.8845 | - |
| 0.9574 | 17570 | 5.9021 | - |
| 0.9580 | 17580 | 5.8809 | - |
| 0.9585 | 17590 | 6.0047 | - |
| 0.9591 | 17600 | 6.0229 | - |
| 0.9596 | 17610 | 6.1139 | - |
| 0.9602 | 17620 | 6.0104 | - |
| 0.9607 | 17630 | 5.6525 | - |
| 0.9613 | 17640 | 5.9619 | - |
| 0.9618 | 17650 | 5.9794 | - |
| 0.9623 | 17660 | 5.9271 | - |
| 0.9629 | 17670 | 5.8962 | - |
| 0.9634 | 17680 | 5.8416 | - |
| 0.9640 | 17690 | 6.0165 | - |
| 0.9645 | 17700 | 5.9813 | - |
| 0.9651 | 17710 | 5.8825 | - |
| 0.9656 | 17720 | 5.9143 | - |
| 0.9662 | 17730 | 6.1838 | - |
| 0.9667 | 17740 | 6.0242 | - |
| 0.9672 | 17750 | 6.0656 | - |
| 0.9678 | 17760 | 5.9659 | - |
| 0.9683 | 17770 | 5.8696 | - |
| 0.9689 | 17780 | 6.0074 | - |
| 0.9694 | 17790 | 5.9784 | - |
| 0.9700 | 17800 | 5.9585 | - |
| 0.9705 | 17810 | 5.7965 | - |
| 0.9711 | 17820 | 5.9768 | - |
| 0.9716 | 17830 | 6.1496 | - |
| 0.9722 | 17840 | 6.0256 | - |
| 0.9727 | 17850 | 6.1192 | - |
| 0.9732 | 17860 | 5.899 | - |
| 0.9738 | 17870 | 6.1613 | - |
| 0.9743 | 17880 | 6.2023 | - |
| 0.9749 | 17890 | 5.7554 | - |
| 0.9754 | 17900 | 6.0038 | - |
| 0.9760 | 17910 | 5.7008 | - |
| 0.9765 | 17920 | 5.6686 | - |
| 0.9771 | 17930 | 5.815 | - |
| 0.9776 | 17940 | 5.7979 | - |
| 0.9781 | 17950 | 5.9028 | - |
| 0.9787 | 17960 | 5.9917 | - |
| 0.9792 | 17970 | 5.7425 | - |
| 0.9798 | 17980 | 6.0715 | - |
| 0.9803 | 17990 | 5.8457 | - |
| 0.9809 | 18000 | 5.8293 | 0.6773 |
| 0.9814 | 18010 | 5.9736 | - |
| 0.9820 | 18020 | 5.9162 | - |
| 0.9825 | 18030 | 5.7664 | - |
| 0.9831 | 18040 | 5.7828 | - |
| 0.9836 | 18050 | 6.1657 | - |
| 0.9841 | 18060 | 5.8895 | - |
| 0.9847 | 18070 | 5.9255 | - |
| 0.9852 | 18080 | 5.9303 | - |
| 0.9858 | 18090 | 5.9428 | - |
| 0.9863 | 18100 | 5.9347 | - |
| 0.9869 | 18110 | 6.1233 | - |
| 0.9874 | 18120 | 5.6448 | - |
| 0.9880 | 18130 | 6.2478 | - |
| 0.9885 | 18140 | 6.1077 | - |
| 0.9890 | 18150 | 6.0059 | - |
| 0.9896 | 18160 | 5.9402 | - |
| 0.9901 | 18170 | 5.7901 | - |
| 0.9907 | 18180 | 5.9455 | - |
| 0.9912 | 18190 | 5.8159 | - |
| 0.9918 | 18200 | 5.8905 | - |
| 0.9923 | 18210 | 5.9376 | - |
| 0.9929 | 18220 | 6.2016 | - |
| 0.9934 | 18230 | 6.1366 | - |
| 0.9940 | 18240 | 5.9439 | - |
| 0.9945 | 18250 | 6.0997 | - |
| 0.9950 | 18260 | 5.8911 | - |
| 0.9956 | 18270 | 6.0323 | - |
| 0.9961 | 18280 | 5.849 | - |
| 0.9967 | 18290 | 6.3401 | - |
| 0.9972 | 18300 | 5.9697 | - |
| 0.9978 | 18310 | 5.8733 | - |
| 0.9983 | 18320 | 5.8617 | - |
| 0.9989 | 18330 | 5.9856 | - |
| 0.9994 | 18340 | 5.9817 | - |
| 0.9999 | 18350 | 5.9722 | - |
| 1.0005 | 18360 | 6.0337 | - |
| 1.0010 | 18370 | 5.9433 | - |
| 1.0016 | 18380 | 5.9586 | - |
| 1.0021 | 18390 | 6.0582 | - |
| 1.0027 | 18400 | 5.9836 | - |
| 1.0032 | 18410 | 5.9315 | - |
| 1.0038 | 18420 | 5.8524 | - |
| 1.0043 | 18430 | 5.7924 | - |
| 1.0048 | 18440 | 5.8787 | - |
| 1.0054 | 18450 | 5.8526 | - |
| 1.0059 | 18460 | 5.8085 | - |
| 1.0065 | 18470 | 5.82 | - |
| 1.0070 | 18480 | 5.9321 | - |
| 1.0076 | 18490 | 6.0088 | - |
| 1.0081 | 18500 | 6.045 | - |
| 1.0087 | 18510 | 5.9326 | - |
| 1.0092 | 18520 | 5.9991 | - |
| 1.0098 | 18530 | 5.9616 | - |
| 1.0103 | 18540 | 5.9167 | - |
| 1.0108 | 18550 | 5.698 | - |
| 1.0114 | 18560 | 5.9083 | - |
| 1.0119 | 18570 | 6.076 | - |
| 1.0125 | 18580 | 5.8842 | - |
| 1.0130 | 18590 | 5.9027 | - |
| 1.0136 | 18600 | 5.7748 | - |
| 1.0141 | 18610 | 5.7468 | - |
| 1.0147 | 18620 | 6.0244 | - |
| 1.0152 | 18630 | 5.6976 | - |
| 1.0157 | 18640 | 5.9608 | - |
| 1.0163 | 18650 | 5.9556 | - |
| 1.0168 | 18660 | 6.0848 | - |
| 1.0174 | 18670 | 6.0796 | - |
| 1.0179 | 18680 | 5.8466 | - |
| 1.0185 | 18690 | 5.9227 | - |
| 1.0190 | 18700 | 5.7808 | - |
| 1.0196 | 18710 | 5.9096 | - |
| 1.0201 | 18720 | 5.9434 | - |
| 1.0207 | 18730 | 5.956 | - |
| 1.0212 | 18740 | 6.0831 | - |
| 1.0217 | 18750 | 5.9605 | - |
| 1.0223 | 18760 | 5.8624 | - |
| 1.0228 | 18770 | 5.8374 | - |
| 1.0234 | 18780 | 5.9078 | - |
| 1.0239 | 18790 | 5.8037 | - |
| 1.0245 | 18800 | 5.8279 | - |
| 1.0250 | 18810 | 5.6056 | - |
| 1.0256 | 18820 | 5.8509 | - |
| 1.0261 | 18830 | 6.0387 | - |
| 1.0266 | 18840 | 5.7657 | - |
| 1.0272 | 18850 | 5.8545 | - |
| 1.0277 | 18860 | 5.7768 | - |
| 1.0283 | 18870 | 5.931 | - |
| 1.0288 | 18880 | 5.8354 | - |
| 1.0294 | 18890 | 5.7838 | - |
| 1.0299 | 18900 | 6.0632 | - |
| 1.0305 | 18910 | 5.9155 | - |
| 1.0310 | 18920 | 5.9991 | - |
| 1.0316 | 18930 | 5.7639 | - |
| 1.0321 | 18940 | 6.0174 | - |
| 1.0326 | 18950 | 6.1368 | - |
| 1.0332 | 18960 | 5.6519 | - |
| 1.0337 | 18970 | 6.0982 | - |
| 1.0343 | 18980 | 5.8701 | - |
| 1.0348 | 18990 | 5.888 | - |
| 1.0354 | 19000 | 5.9899 | 0.6836 |
| 1.0359 | 19010 | 5.8811 | - |
| 1.0365 | 19020 | 5.9352 | - |
| 1.0370 | 19030 | 5.7247 | - |
| 1.0375 | 19040 | 5.9272 | - |
| 1.0381 | 19050 | 5.7434 | - |
| 1.0386 | 19060 | 5.7268 | - |
| 1.0392 | 19070 | 5.7372 | - |
| 1.0397 | 19080 | 5.7621 | - |
| 1.0403 | 19090 | 6.09 | - |
| 1.0408 | 19100 | 5.8984 | - |
| 1.0414 | 19110 | 6.0309 | - |
| 1.0419 | 19120 | 5.8809 | - |
| 1.0425 | 19130 | 5.7264 | - |
| 1.0430 | 19140 | 5.9355 | - |
| 1.0435 | 19150 | 6.0427 | - |
| 1.0441 | 19160 | 5.8753 | - |
| 1.0446 | 19170 | 5.7595 | - |
| 1.0452 | 19180 | 5.9283 | - |
| 1.0457 | 19190 | 5.9661 | - |
| 1.0463 | 19200 | 5.8398 | - |
| 1.0468 | 19210 | 6.0788 | - |
| 1.0474 | 19220 | 5.5745 | - |
| 1.0479 | 19230 | 6.1415 | - |
| 1.0484 | 19240 | 5.9319 | - |
| 1.0490 | 19250 | 6.1778 | - |
| 1.0495 | 19260 | 6.0221 | - |
| 1.0501 | 19270 | 5.9374 | - |
| 1.0506 | 19280 | 5.8427 | - |
| 1.0512 | 19290 | 5.834 | - |
| 1.0517 | 19300 | 5.951 | - |
| 1.0523 | 19310 | 5.6753 | - |
| 1.0528 | 19320 | 5.6815 | - |
| 1.0533 | 19330 | 5.5665 | - |
| 1.0539 | 19340 | 5.8056 | - |
| 1.0544 | 19350 | 5.9032 | - |
| 1.0550 | 19360 | 6.0827 | - |
| 1.0555 | 19370 | 5.7528 | - |
| 1.0561 | 19380 | 5.5812 | - |
| 1.0566 | 19390 | 5.8706 | - |
| 1.0572 | 19400 | 6.0811 | - |
| 1.0577 | 19410 | 6.0508 | - |
| 1.0583 | 19420 | 5.9019 | - |
| 1.0588 | 19430 | 5.8566 | - |
| 1.0593 | 19440 | 5.9227 | - |
| 1.0599 | 19450 | 5.9522 | - |
| 1.0604 | 19460 | 5.7756 | - |
| 1.0610 | 19470 | 5.9979 | - |
| 1.0615 | 19480 | 5.9214 | - |
| 1.0621 | 19490 | 5.7155 | - |
| 1.0626 | 19500 | 5.8474 | - |
| 1.0632 | 19510 | 5.9543 | - |
| 1.0637 | 19520 | 5.8074 | - |
| 1.0642 | 19530 | 5.9686 | - |
| 1.0648 | 19540 | 5.7759 | - |
| 1.0653 | 19550 | 6.0478 | - |
| 1.0659 | 19560 | 5.8857 | - |
| 1.0664 | 19570 | 5.9714 | - |
| 1.0670 | 19580 | 5.6625 | - |
| 1.0675 | 19590 | 5.9883 | - |
| 1.0681 | 19600 | 5.7688 | - |
| 1.0686 | 19610 | 5.9429 | - |
| 1.0692 | 19620 | 5.9463 | - |
| 1.0697 | 19630 | 6.048 | - |
| 1.0702 | 19640 | 5.5378 | - |
| 1.0708 | 19650 | 5.8616 | - |
| 1.0713 | 19660 | 5.7845 | - |
| 1.0719 | 19670 | 5.8152 | - |
| 1.0724 | 19680 | 5.9642 | - |
| 1.0730 | 19690 | 5.8571 | - |
| 1.0735 | 19700 | 5.7606 | - |
| 1.0741 | 19710 | 5.8948 | - |
| 1.0746 | 19720 | 5.7714 | - |
| 1.0751 | 19730 | 6.034 | - |
| 1.0757 | 19740 | 5.8587 | - |
| 1.0762 | 19750 | 5.8159 | - |
| 1.0768 | 19760 | 6.0334 | - |
| 1.0773 | 19770 | 5.9195 | - |
| 1.0779 | 19780 | 5.8635 | - |
| 1.0784 | 19790 | 5.8225 | - |
| 1.0790 | 19800 | 5.977 | - |
| 1.0795 | 19810 | 6.0155 | - |
| 1.0801 | 19820 | 5.8737 | - |
| 1.0806 | 19830 | 6.0383 | - |
| 1.0811 | 19840 | 6.1447 | - |
| 1.0817 | 19850 | 5.7216 | - |
| 1.0822 | 19860 | 6.013 | - |
| 1.0828 | 19870 | 5.8183 | - |
| 1.0833 | 19880 | 5.6804 | - |
| 1.0839 | 19890 | 5.8341 | - |
| 1.0844 | 19900 | 5.8046 | - |
| 1.0850 | 19910 | 5.6202 | - |
| 1.0855 | 19920 | 5.7969 | - |
| 1.0860 | 19930 | 5.7087 | - |
| 1.0866 | 19940 | 5.6474 | - |
| 1.0871 | 19950 | 5.8814 | - |
| 1.0877 | 19960 | 5.8278 | - |
| 1.0882 | 19970 | 5.7063 | - |
| 1.0888 | 19980 | 5.8695 | - |
| 1.0893 | 19990 | 5.9189 | - |
| 1.0899 | 20000 | 5.8293 | 0.6817 |
| 1.0904 | 20010 | 5.8105 | - |
| 1.0909 | 20020 | 5.7242 | - |
| 1.0915 | 20030 | 5.9195 | - |
| 1.0920 | 20040 | 5.6062 | - |
| 1.0926 | 20050 | 5.7785 | - |
| 1.0931 | 20060 | 5.8454 | - |
| 1.0937 | 20070 | 5.6694 | - |
| 1.0942 | 20080 | 6.0052 | - |
| 1.0948 | 20090 | 6.0341 | - |
| 1.0953 | 20100 | 6.0187 | - |
| 1.0959 | 20110 | 5.6195 | - |
| 1.0964 | 20120 | 5.8602 | - |
| 1.0969 | 20130 | 5.8215 | - |
| 1.0975 | 20140 | 5.9741 | - |
| 1.0980 | 20150 | 5.9247 | - |
| 1.0986 | 20160 | 5.9993 | - |
| 1.0991 | 20170 | 5.769 | - |
| 1.0997 | 20180 | 5.6859 | - |
| 1.1002 | 20190 | 5.8302 | - |
| 1.1008 | 20200 | 5.7895 | - |
| 1.1013 | 20210 | 5.6693 | - |
| 1.1018 | 20220 | 5.93 | - |
| 1.1024 | 20230 | 5.6806 | - |
| 1.1029 | 20240 | 6.0053 | - |
| 1.1035 | 20250 | 5.5933 | - |
| 1.1040 | 20260 | 5.6481 | - |
| 1.1046 | 20270 | 5.825 | - |
| 1.1051 | 20280 | 5.9647 | - |
| 1.1057 | 20290 | 6.0301 | - |
| 1.1062 | 20300 | 5.8475 | - |
| 1.1068 | 20310 | 5.7795 | - |
| 1.1073 | 20320 | 5.8092 | - |
| 1.1078 | 20330 | 5.9727 | - |
| 1.1084 | 20340 | 6.0042 | - |
| 1.1089 | 20350 | 6.1804 | - |
| 1.1095 | 20360 | 5.7782 | - |
| 1.1100 | 20370 | 5.8302 | - |
| 1.1106 | 20380 | 5.8773 | - |
| 1.1111 | 20390 | 5.8525 | - |
| 1.1117 | 20400 | 5.8832 | - |
| 1.1122 | 20410 | 6.0264 | - |
| 1.1127 | 20420 | 5.6919 | - |
| 1.1133 | 20430 | 5.8473 | - |
| 1.1138 | 20440 | 5.8246 | - |
| 1.1144 | 20450 | 5.6735 | - |
| 1.1149 | 20460 | 5.8929 | - |
| 1.1155 | 20470 | 6.1116 | - |
| 1.1160 | 20480 | 6.0014 | - |
| 1.1166 | 20490 | 5.7073 | - |
| 1.1171 | 20500 | 5.726 | - |
| 1.1177 | 20510 | 5.8832 | - |
| 1.1182 | 20520 | 5.8501 | - |
| 1.1187 | 20530 | 5.7399 | - |
| 1.1193 | 20540 | 5.8335 | - |
| 1.1198 | 20550 | 5.8248 | - |
| 1.1204 | 20560 | 5.9638 | - |
| 1.1209 | 20570 | 5.6932 | - |
| 1.1215 | 20580 | 5.7174 | - |
| 1.1220 | 20590 | 5.887 | - |
| 1.1226 | 20600 | 5.9698 | - |
| 1.1231 | 20610 | 5.8764 | - |
| 1.1236 | 20620 | 5.4503 | - |
| 1.1242 | 20630 | 5.7976 | - |
| 1.1247 | 20640 | 5.8378 | - |
| 1.1253 | 20650 | 5.7228 | - |
| 1.1258 | 20660 | 5.8012 | - |
| 1.1264 | 20670 | 5.9416 | - |
| 1.1269 | 20680 | 5.9933 | - |
| 1.1275 | 20690 | 5.8818 | - |
| 1.1280 | 20700 | 6.0917 | - |
| 1.1285 | 20710 | 5.9114 | - |
| 1.1291 | 20720 | 5.652 | - |
| 1.1296 | 20730 | 5.8088 | - |
| 1.1302 | 20740 | 5.8552 | - |
| 1.1307 | 20750 | 5.959 | - |
| 1.1313 | 20760 | 5.6705 | - |
| 1.1318 | 20770 | 5.6958 | - |
| 1.1324 | 20780 | 5.9608 | - |
| 1.1329 | 20790 | 6.0596 | - |
| 1.1335 | 20800 | 5.9248 | - |
| 1.1340 | 20810 | 5.7469 | - |
| 1.1345 | 20820 | 5.8097 | - |
| 1.1351 | 20830 | 6.0221 | - |
| 1.1356 | 20840 | 5.9775 | - |
| 1.1362 | 20850 | 5.9494 | - |
| 1.1367 | 20860 | 5.7591 | - |
| 1.1373 | 20870 | 5.6284 | - |
| 1.1378 | 20880 | 5.6942 | - |
| 1.1384 | 20890 | 5.7372 | - |
| 1.1389 | 20900 | 5.7241 | - |
| 1.1394 | 20910 | 5.7844 | - |
| 1.1400 | 20920 | 5.7742 | - |
| 1.1405 | 20930 | 5.7847 | - |
| 1.1411 | 20940 | 5.6805 | - |
| 1.1416 | 20950 | 5.7357 | - |
| 1.1422 | 20960 | 5.6039 | - |
| 1.1427 | 20970 | 5.8574 | - |
| 1.1433 | 20980 | 5.7365 | - |
| 1.1438 | 20990 | 5.8479 | - |
| 1.1444 | 21000 | 5.5633 | 0.6843 |
| 1.1449 | 21010 | 5.9468 | - |
| 1.1454 | 21020 | 5.7714 | - |
| 1.1460 | 21030 | 5.6738 | - |
| 1.1465 | 21040 | 5.7109 | - |
| 1.1471 | 21050 | 5.7233 | - |
| 1.1476 | 21060 | 6.005 | - |
| 1.1482 | 21070 | 5.9901 | - |
| 1.1487 | 21080 | 5.875 | - |
| 1.1493 | 21090 | 6.1535 | - |
| 1.1498 | 21100 | 5.8092 | - |
| 1.1503 | 21110 | 6.2264 | - |
| 1.1509 | 21120 | 5.963 | - |
| 1.1514 | 21130 | 5.9493 | - |
| 1.1520 | 21140 | 5.7621 | - |
| 1.1525 | 21150 | 5.8869 | - |
| 1.1531 | 21160 | 5.9088 | - |
| 1.1536 | 21170 | 5.9007 | - |
| 1.1542 | 21180 | 5.8338 | - |
| 1.1547 | 21190 | 5.7871 | - |
| 1.1553 | 21200 | 5.6933 | - |
| 1.1558 | 21210 | 5.8236 | - |
| 1.1563 | 21220 | 6.1012 | - |
| 1.1569 | 21230 | 5.7736 | - |
| 1.1574 | 21240 | 6.0647 | - |
| 1.1580 | 21250 | 5.7826 | - |
| 1.1585 | 21260 | 5.9354 | - |
| 1.1591 | 21270 | 5.891 | - |
| 1.1596 | 21280 | 5.973 | - |
| 1.1602 | 21290 | 5.7654 | - |
| 1.1607 | 21300 | 5.8027 | - |
| 1.1612 | 21310 | 5.6247 | - |
| 1.1618 | 21320 | 5.8319 | - |
| 1.1623 | 21330 | 5.7332 | - |
| 1.1629 | 21340 | 5.7287 | - |
| 1.1634 | 21350 | 5.8157 | - |
| 1.1640 | 21360 | 5.9619 | - |
| 1.1645 | 21370 | 5.8651 | - |
| 1.1651 | 21380 | 5.7479 | - |
| 1.1656 | 21390 | 5.7871 | - |
| 1.1661 | 21400 | 5.9504 | - |
| 1.1667 | 21410 | 5.6716 | - |
| 1.1672 | 21420 | 5.6958 | - |
| 1.1678 | 21430 | 5.8896 | - |
| 1.1683 | 21440 | 5.8449 | - |
| 1.1689 | 21450 | 5.9782 | - |
| 1.1694 | 21460 | 5.6729 | - |
| 1.1700 | 21470 | 5.8591 | - |
| 1.1705 | 21480 | 5.7676 | - |
| 1.1711 | 21490 | 5.8541 | - |
| 1.1716 | 21500 | 5.7817 | - |
| 1.1721 | 21510 | 6.0344 | - |
| 1.1727 | 21520 | 5.6762 | - |
| 1.1732 | 21530 | 5.8091 | - |
| 1.1738 | 21540 | 5.7732 | - |
| 1.1743 | 21550 | 5.7379 | - |
| 1.1749 | 21560 | 5.7541 | - |
| 1.1754 | 21570 | 5.5967 | - |
| 1.1760 | 21580 | 5.7935 | - |
| 1.1765 | 21590 | 5.8454 | - |
| 1.1770 | 21600 | 5.872 | - |
| 1.1776 | 21610 | 5.7634 | - |
| 1.1781 | 21620 | 5.7863 | - |
| 1.1787 | 21630 | 5.7747 | - |
| 1.1792 | 21640 | 6.1293 | - |
| 1.1798 | 21650 | 6.0968 | - |
| 1.1803 | 21660 | 6.0418 | - |
| 1.1809 | 21670 | 5.7928 | - |
| 1.1814 | 21680 | 5.8597 | - |
| 1.1820 | 21690 | 5.9231 | - |
| 1.1825 | 21700 | 5.8693 | - |
| 1.1830 | 21710 | 5.5781 | - |
| 1.1836 | 21720 | 5.7456 | - |
| 1.1841 | 21730 | 5.8273 | - |
| 1.1847 | 21740 | 6.0228 | - |
| 1.1852 | 21750 | 5.7938 | - |
| 1.1858 | 21760 | 5.6766 | - |
| 1.1863 | 21770 | 5.9789 | - |
| 1.1869 | 21780 | 5.7303 | - |
| 1.1874 | 21790 | 6.0931 | - |
| 1.1879 | 21800 | 5.6833 | - |
| 1.1885 | 21810 | 5.6975 | - |
| 1.1890 | 21820 | 6.0106 | - |
| 1.1896 | 21830 | 5.7647 | - |
| 1.1901 | 21840 | 5.7917 | - |
| 1.1907 | 21850 | 5.8663 | - |
| 1.1912 | 21860 | 5.8781 | - |
| 1.1918 | 21870 | 6.1547 | - |
| 1.1923 | 21880 | 5.7675 | - |
| 1.1929 | 21890 | 5.7767 | - |
| 1.1934 | 21900 | 5.8643 | - |
| 1.1939 | 21910 | 5.7085 | - |
| 1.1945 | 21920 | 5.8021 | - |
| 1.1950 | 21930 | 5.6933 | - |
| 1.1956 | 21940 | 5.7886 | - |
| 1.1961 | 21950 | 5.6523 | - |
| 1.1967 | 21960 | 5.7623 | - |
| 1.1972 | 21970 | 5.9981 | - |
| 1.1978 | 21980 | 5.573 | - |
| 1.1983 | 21990 | 5.6556 | - |
| 1.1988 | 22000 | 5.5948 | 0.6869 |
| 1.1994 | 22010 | 5.9093 | - |
| 1.1999 | 22020 | 6.0277 | - |
| 1.2005 | 22030 | 5.8165 | - |
| 1.2010 | 22040 | 5.9439 | - |
| 1.2016 | 22050 | 6.0002 | - |
| 1.2021 | 22060 | 5.9601 | - |
| 1.2027 | 22070 | 5.8504 | - |
| 1.2032 | 22080 | 5.7186 | - |
| 1.2037 | 22090 | 5.8011 | - |
| 1.2043 | 22100 | 5.7431 | - |
| 1.2048 | 22110 | 5.8327 | - |
| 1.2054 | 22120 | 5.8064 | - |
| 1.2059 | 22130 | 5.9055 | - |
| 1.2065 | 22140 | 5.8467 | - |
| 1.2070 | 22150 | 5.5217 | - |
| 1.2076 | 22160 | 5.8475 | - |
| 1.2081 | 22170 | 6.01 | - |
| 1.2087 | 22180 | 5.7468 | - |
| 1.2092 | 22190 | 5.7293 | - |
| 1.2097 | 22200 | 5.9531 | - |
| 1.2103 | 22210 | 5.8819 | - |
| 1.2108 | 22220 | 5.8085 | - |
| 1.2114 | 22230 | 5.8983 | - |
| 1.2119 | 22240 | 5.8728 | - |
| 1.2125 | 22250 | 5.7898 | - |
| 1.2130 | 22260 | 5.8714 | - |
| 1.2136 | 22270 | 5.7651 | - |
| 1.2141 | 22280 | 5.8685 | - |
| 1.2146 | 22290 | 5.8161 | - |
| 1.2152 | 22300 | 5.5279 | - |
| 1.2157 | 22310 | 5.706 | - |
| 1.2163 | 22320 | 5.8902 | - |
| 1.2168 | 22330 | 5.8936 | - |
| 1.2174 | 22340 | 5.8184 | - |
| 1.2179 | 22350 | 5.9792 | - |
| 1.2185 | 22360 | 5.8349 | - |
| 1.2190 | 22370 | 5.933 | - |
| 1.2196 | 22380 | 5.8662 | - |
| 1.2201 | 22390 | 5.8858 | - |
| 1.2206 | 22400 | 5.5741 | - |
| 1.2212 | 22410 | 5.7543 | - |
| 1.2217 | 22420 | 6.0338 | - |
| 1.2223 | 22430 | 5.7122 | - |
| 1.2228 | 22440 | 5.926 | - |
| 1.2234 | 22450 | 5.5394 | - |
| 1.2239 | 22460 | 5.9359 | - |
| 1.2245 | 22470 | 5.8143 | - |
| 1.2250 | 22480 | 5.7954 | - |
| 1.2255 | 22490 | 5.7851 | - |
| 1.2261 | 22500 | 5.8309 | - |
| 1.2266 | 22510 | 5.8221 | - |
| 1.2272 | 22520 | 5.7032 | - |
| 1.2277 | 22530 | 5.6778 | - |
| 1.2283 | 22540 | 5.8993 | - |
| 1.2288 | 22550 | 5.8846 | - |
| 1.2294 | 22560 | 5.8893 | - |
| 1.2299 | 22570 | 5.8483 | - |
| 1.2305 | 22580 | 5.7518 | - |
| 1.2310 | 22590 | 6.0793 | - |
| 1.2315 | 22600 | 5.8438 | - |
| 1.2321 | 22610 | 6.0072 | - |
| 1.2326 | 22620 | 5.8513 | - |
| 1.2332 | 22630 | 5.7583 | - |
| 1.2337 | 22640 | 5.6034 | - |
| 1.2343 | 22650 | 5.5299 | - |
| 1.2348 | 22660 | 5.5622 | - |
| 1.2354 | 22670 | 5.7719 | - |
| 1.2359 | 22680 | 5.9686 | - |
| 1.2364 | 22690 | 5.8201 | - |
| 1.2370 | 22700 | 5.9176 | - |
| 1.2375 | 22710 | 5.6818 | - |
| 1.2381 | 22720 | 6.1204 | - |
| 1.2386 | 22730 | 5.8767 | - |
| 1.2392 | 22740 | 5.96 | - |
| 1.2397 | 22750 | 5.7246 | - |
| 1.2403 | 22760 | 5.8098 | - |
| 1.2408 | 22770 | 5.7652 | - |
| 1.2413 | 22780 | 6.0349 | - |
| 1.2419 | 22790 | 5.8066 | - |
| 1.2424 | 22800 | 5.657 | - |
| 1.2430 | 22810 | 5.6859 | - |
| 1.2435 | 22820 | 5.7559 | - |
| 1.2441 | 22830 | 6.027 | - |
| 1.2446 | 22840 | 5.8261 | - |
| 1.2452 | 22850 | 5.9778 | - |
| 1.2457 | 22860 | 5.7815 | - |
| 1.2463 | 22870 | 5.8555 | - |
| 1.2468 | 22880 | 5.7494 | - |
| 1.2473 | 22890 | 5.681 | - |
| 1.2479 | 22900 | 5.9376 | - |
| 1.2484 | 22910 | 5.6899 | - |
| 1.2490 | 22920 | 5.8975 | - |
| 1.2495 | 22930 | 5.8349 | - |
| 1.2501 | 22940 | 5.8471 | - |
| 1.2506 | 22950 | 5.7547 | - |
| 1.2512 | 22960 | 5.8888 | - |
| 1.2517 | 22970 | 5.9409 | - |
| 1.2522 | 22980 | 5.8012 | - |
| 1.2528 | 22990 | 5.8519 | - |
| 1.2533 | 23000 | 5.6479 | 0.6903 |
| 1.2539 | 23010 | 5.7852 | - |
| 1.2544 | 23020 | 5.9057 | - |
| 1.2550 | 23030 | 5.8632 | - |
| 1.2555 | 23040 | 5.8678 | - |
| 1.2561 | 23050 | 5.6034 | - |
| 1.2566 | 23060 | 5.8775 | - |
| 1.2572 | 23070 | 6.04 | - |
| 1.2577 | 23080 | 5.9253 | - |
| 1.2582 | 23090 | 5.5547 | - |
| 1.2588 | 23100 | 5.8907 | - |
| 1.2593 | 23110 | 5.7453 | - |
| 1.2599 | 23120 | 5.8673 | - |
| 1.2604 | 23130 | 5.7412 | - |
| 1.2610 | 23140 | 5.9789 | - |
| 1.2615 | 23150 | 5.7082 | - |
| 1.2621 | 23160 | 5.6948 | - |
| 1.2626 | 23170 | 5.836 | - |
| 1.2631 | 23180 | 5.6322 | - |
| 1.2637 | 23190 | 5.8009 | - |
| 1.2642 | 23200 | 5.5292 | - |
| 1.2648 | 23210 | 5.6325 | - |
| 1.2653 | 23220 | 5.5528 | - |
| 1.2659 | 23230 | 5.7025 | - |
| 1.2664 | 23240 | 6.0385 | - |
| 1.2670 | 23250 | 5.8844 | - |
| 1.2675 | 23260 | 5.9915 | - |
| 1.2681 | 23270 | 5.7288 | - |
| 1.2686 | 23280 | 6.0285 | - |
| 1.2691 | 23290 | 5.7188 | - |
| 1.2697 | 23300 | 5.7104 | - |
| 1.2702 | 23310 | 5.9119 | - |
| 1.2708 | 23320 | 5.8162 | - |
| 1.2713 | 23330 | 5.7296 | - |
| 1.2719 | 23340 | 5.9956 | - |
| 1.2724 | 23350 | 5.7818 | - |
| 1.2730 | 23360 | 5.8438 | - |
| 1.2735 | 23370 | 6.0184 | - |
| 1.2740 | 23380 | 5.7828 | - |
| 1.2746 | 23390 | 5.8083 | - |
| 1.2751 | 23400 | 5.6847 | - |
| 1.2757 | 23410 | 5.7323 | - |
| 1.2762 | 23420 | 5.7453 | - |
| 1.2768 | 23430 | 5.8459 | - |
| 1.2773 | 23440 | 5.7308 | - |
| 1.2779 | 23450 | 5.8836 | - |
| 1.2784 | 23460 | 5.8379 | - |
| 1.2789 | 23470 | 5.7605 | - |
| 1.2795 | 23480 | 5.9098 | - |
| 1.2800 | 23490 | 5.646 | - |
| 1.2806 | 23500 | 5.7181 | - |
| 1.2811 | 23510 | 5.6943 | - |
| 1.2817 | 23520 | 5.7 | - |
| 1.2822 | 23530 | 5.7006 | - |
| 1.2828 | 23540 | 5.613 | - |
| 1.2833 | 23550 | 5.6963 | - |
| 1.2839 | 23560 | 6.0629 | - |
| 1.2844 | 23570 | 5.7014 | - |
| 1.2849 | 23580 | 5.7237 | - |
| 1.2855 | 23590 | 5.8864 | - |
| 1.2860 | 23600 | 5.9241 | - |
| 1.2866 | 23610 | 5.8934 | - |
| 1.2871 | 23620 | 5.8477 | - |
| 1.2877 | 23630 | 5.6645 | - |
| 1.2882 | 23640 | 5.6035 | - |
| 1.2888 | 23650 | 5.8181 | - |
| 1.2893 | 23660 | 5.5162 | - |
| 1.2898 | 23670 | 5.7738 | - |
| 1.2904 | 23680 | 5.5964 | - |
| 1.2909 | 23690 | 5.8502 | - |
| 1.2915 | 23700 | 5.7058 | - |
| 1.2920 | 23710 | 5.9347 | - |
| 1.2926 | 23720 | 5.8968 | - |
| 1.2931 | 23730 | 5.6325 | - |
| 1.2937 | 23740 | 5.7037 | - |
| 1.2942 | 23750 | 5.8395 | - |
| 1.2948 | 23760 | 5.6966 | - |
| 1.2953 | 23770 | 5.9137 | - |
| 1.2958 | 23780 | 5.6377 | - |
| 1.2964 | 23790 | 5.7154 | - |
| 1.2969 | 23800 | 5.612 | - |
| 1.2975 | 23810 | 5.7961 | - |
| 1.2980 | 23820 | 5.7571 | - |
| 1.2986 | 23830 | 5.8376 | - |
| 1.2991 | 23840 | 5.8119 | - |
| 1.2997 | 23850 | 5.7929 | - |
| 1.3002 | 23860 | 5.9724 | - |
| 1.3007 | 23870 | 5.8234 | - |
| 1.3013 | 23880 | 5.8878 | - |
| 1.3018 | 23890 | 6.0561 | - |
| 1.3024 | 23900 | 5.8548 | - |
| 1.3029 | 23910 | 5.7705 | - |
| 1.3035 | 23920 | 5.5812 | - |
| 1.3040 | 23930 | 5.9343 | - |
| 1.3046 | 23940 | 5.627 | - |
| 1.3051 | 23950 | 5.7234 | - |
| 1.3057 | 23960 | 5.8396 | - |
| 1.3062 | 23970 | 5.6981 | - |
| 1.3067 | 23980 | 5.8679 | - |
| 1.3073 | 23990 | 5.5181 | - |
| 1.3078 | 24000 | 5.7786 | 0.6852 |
| 1.3084 | 24010 | 5.9155 | - |
| 1.3089 | 24020 | 5.9482 | - |
| 1.3095 | 24030 | 5.7341 | - |
| 1.3100 | 24040 | 5.7581 | - |
| 1.3106 | 24050 | 5.7291 | - |
| 1.3111 | 24060 | 6.014 | - |
| 1.3116 | 24070 | 5.9162 | - |
| 1.3122 | 24080 | 5.701 | - |
| 1.3127 | 24090 | 5.6924 | - |
| 1.3133 | 24100 | 5.6624 | - |
| 1.3138 | 24110 | 5.6339 | - |
| 1.3144 | 24120 | 5.6643 | - |
| 1.3149 | 24130 | 5.6189 | - |
| 1.3155 | 24140 | 5.9921 | - |
| 1.3160 | 24150 | 5.5462 | - |
| 1.3165 | 24160 | 6.0507 | - |
| 1.3171 | 24170 | 5.9267 | - |
| 1.3176 | 24180 | 5.9007 | - |
| 1.3182 | 24190 | 5.8645 | - |
| 1.3187 | 24200 | 5.5921 | - |
| 1.3193 | 24210 | 5.9193 | - |
| 1.3198 | 24220 | 5.5943 | - |
| 1.3204 | 24230 | 5.9227 | - |
| 1.3209 | 24240 | 5.6919 | - |
| 1.3215 | 24250 | 5.6083 | - |
| 1.3220 | 24260 | 5.5775 | - |
| 1.3225 | 24270 | 5.7889 | - |
| 1.3231 | 24280 | 5.8176 | - |
| 1.3236 | 24290 | 5.4795 | - |
| 1.3242 | 24300 | 6.0121 | - |
| 1.3247 | 24310 | 5.9449 | - |
| 1.3253 | 24320 | 5.7774 | - |
| 1.3258 | 24330 | 5.8281 | - |
| 1.3264 | 24340 | 5.9186 | - |
| 1.3269 | 24350 | 5.7832 | - |
| 1.3274 | 24360 | 5.8047 | - |
| 1.3280 | 24370 | 5.9888 | - |
| 1.3285 | 24380 | 5.8924 | - |
| 1.3291 | 24390 | 5.4532 | - |
| 1.3296 | 24400 | 6.2111 | - |
| 1.3302 | 24410 | 5.7632 | - |
| 1.3307 | 24420 | 5.7949 | - |
| 1.3313 | 24430 | 5.7567 | - |
| 1.3318 | 24440 | 5.7828 | - |
| 1.3324 | 24450 | 5.7185 | - |
| 1.3329 | 24460 | 5.7316 | - |
| 1.3334 | 24470 | 5.6404 | - |
| 1.3340 | 24480 | 5.5988 | - |
| 1.3345 | 24490 | 5.5632 | - |
| 1.3351 | 24500 | 5.8433 | - |
| 1.3356 | 24510 | 5.6993 | - |
| 1.3362 | 24520 | 5.5907 | - |
| 1.3367 | 24530 | 5.823 | - |
| 1.3373 | 24540 | 5.9377 | - |
| 1.3378 | 24550 | 5.6635 | - |
| 1.3383 | 24560 | 5.5731 | - |
| 1.3389 | 24570 | 5.7503 | - |
| 1.3394 | 24580 | 5.9547 | - |
| 1.3400 | 24590 | 5.7483 | - |
| 1.3405 | 24600 | 5.7456 | - |
| 1.3411 | 24610 | 5.5938 | - |
| 1.3416 | 24620 | 5.607 | - |
| 1.3422 | 24630 | 5.5716 | - |
| 1.3427 | 24640 | 5.4855 | - |
| 1.3433 | 24650 | 5.7349 | - |
| 1.3438 | 24660 | 5.953 | - |
| 1.3443 | 24670 | 5.9246 | - |
| 1.3449 | 24680 | 5.7512 | - |
| 1.3454 | 24690 | 5.7655 | - |
| 1.3460 | 24700 | 5.6331 | - |
| 1.3465 | 24710 | 5.7734 | - |
| 1.3471 | 24720 | 5.5061 | - |
| 1.3476 | 24730 | 5.9164 | - |
| 1.3482 | 24740 | 5.9115 | - |
| 1.3487 | 24750 | 5.9232 | - |
| 1.3492 | 24760 | 5.616 | - |
| 1.3498 | 24770 | 5.6697 | - |
| 1.3503 | 24780 | 6.0127 | - |
| 1.3509 | 24790 | 5.7706 | - |
| 1.3514 | 24800 | 5.6955 | - |
| 1.3520 | 24810 | 5.8807 | - |
| 1.3525 | 24820 | 5.7052 | - |
| 1.3531 | 24830 | 5.7575 | - |
| 1.3536 | 24840 | 5.8471 | - |
| 1.3541 | 24850 | 5.838 | - |
| 1.3547 | 24860 | 5.5788 | - |
| 1.3552 | 24870 | 5.791 | - |
| 1.3558 | 24880 | 5.543 | - |
| 1.3563 | 24890 | 6.0196 | - |
| 1.3569 | 24900 | 5.6992 | - |
| 1.3574 | 24910 | 5.5764 | - |
| 1.3580 | 24920 | 5.6985 | - |
| 1.3585 | 24930 | 5.7727 | - |
| 1.3591 | 24940 | 5.5765 | - |
| 1.3596 | 24950 | 5.9706 | - |
| 1.3601 | 24960 | 5.8801 | - |
| 1.3607 | 24970 | 5.7036 | - |
| 1.3612 | 24980 | 5.73 | - |
| 1.3618 | 24990 | 5.7313 | - |
| 1.3623 | 25000 | 5.6068 | 0.6818 |
| 1.3629 | 25010 | 5.7446 | - |
| 1.3634 | 25020 | 5.5888 | - |
| 1.3640 | 25030 | 5.7245 | - |
| 1.3645 | 25040 | 5.7484 | - |
| 1.3650 | 25050 | 5.8132 | - |
| 1.3656 | 25060 | 5.764 | - |
| 1.3661 | 25070 | 5.7473 | - |
| 1.3667 | 25080 | 5.6199 | - |
| 1.3672 | 25090 | 5.5347 | - |
| 1.3678 | 25100 | 5.6777 | - |
| 1.3683 | 25110 | 5.8948 | - |
| 1.3689 | 25120 | 5.8468 | - |
| 1.3694 | 25130 | 5.5211 | - |
| 1.3700 | 25140 | 5.594 | - |
| 1.3705 | 25150 | 5.8264 | - |
| 1.3710 | 25160 | 5.5385 | - |
| 1.3716 | 25170 | 5.6067 | - |
| 1.3721 | 25180 | 5.7562 | - |
| 1.3727 | 25190 | 5.6213 | - |
| 1.3732 | 25200 | 5.6521 | - |
| 1.3738 | 25210 | 5.6575 | - |
| 1.3743 | 25220 | 5.6089 | - |
| 1.3749 | 25230 | 5.4729 | - |
| 1.3754 | 25240 | 5.763 | - |
| 1.3759 | 25250 | 5.9475 | - |
| 1.3765 | 25260 | 5.8515 | - |
| 1.3770 | 25270 | 5.7369 | - |
| 1.3776 | 25280 | 5.7088 | - |
| 1.3781 | 25290 | 5.9382 | - |
| 1.3787 | 25300 | 5.5245 | - |
| 1.3792 | 25310 | 5.6669 | - |
| 1.3798 | 25320 | 5.7075 | - |
| 1.3803 | 25330 | 5.9021 | - |
| 1.3809 | 25340 | 5.6129 | - |
| 1.3814 | 25350 | 5.8234 | - |
| 1.3819 | 25360 | 5.7876 | - |
| 1.3825 | 25370 | 5.9276 | - |
| 1.3830 | 25380 | 6.0568 | - |
| 1.3836 | 25390 | 5.8147 | - |
| 1.3841 | 25400 | 5.7909 | - |
| 1.3847 | 25410 | 5.6609 | - |
| 1.3852 | 25420 | 5.6419 | - |
| 1.3858 | 25430 | 5.9034 | - |
| 1.3863 | 25440 | 5.831 | - |
| 1.3868 | 25450 | 5.7195 | - |
| 1.3874 | 25460 | 5.9654 | - |
| 1.3879 | 25470 | 5.8812 | - |
| 1.3885 | 25480 | 5.8554 | - |
| 1.3890 | 25490 | 5.4667 | - |
| 1.3896 | 25500 | 5.6968 | - |
| 1.3901 | 25510 | 5.8943 | - |
| 1.3907 | 25520 | 5.833 | - |
| 1.3912 | 25530 | 5.4091 | - |
| 1.3917 | 25540 | 5.7771 | - |
| 1.3923 | 25550 | 5.7363 | - |
| 1.3928 | 25560 | 5.7518 | - |
| 1.3934 | 25570 | 5.4856 | - |
| 1.3939 | 25580 | 5.7251 | - |
| 1.3945 | 25590 | 5.6038 | - |
| 1.3950 | 25600 | 5.787 | - |
| 1.3956 | 25610 | 5.6681 | - |
| 1.3961 | 25620 | 5.6053 | - |
| 1.3967 | 25630 | 5.6908 | - |
| 1.3972 | 25640 | 5.6322 | - |
| 1.3977 | 25650 | 5.8106 | - |
| 1.3983 | 25660 | 5.6728 | - |
| 1.3988 | 25670 | 5.9984 | - |
| 1.3994 | 25680 | 5.6208 | - |
| 1.3999 | 25690 | 5.8389 | - |
| 1.4005 | 25700 | 5.6312 | - |
| 1.4010 | 25710 | 5.6432 | - |
| 1.4016 | 25720 | 5.558 | - |
| 1.4021 | 25730 | 5.815 | - |
| 1.4026 | 25740 | 5.7772 | - |
| 1.4032 | 25750 | 5.7902 | - |
| 1.4037 | 25760 | 5.6028 | - |
| 1.4043 | 25770 | 5.4786 | - |
| 1.4048 | 25780 | 5.7688 | - |
| 1.4054 | 25790 | 5.7262 | - |
| 1.4059 | 25800 | 5.6303 | - |
| 1.4065 | 25810 | 5.726 | - |
| 1.4070 | 25820 | 5.9654 | - |
| 1.4076 | 25830 | 5.8404 | - |
| 1.4081 | 25840 | 5.7563 | - |
| 1.4086 | 25850 | 5.8076 | - |
| 1.4092 | 25860 | 5.977 | - |
| 1.4097 | 25870 | 5.9062 | - |
| 1.4103 | 25880 | 5.6077 | - |
| 1.4108 | 25890 | 5.9347 | - |
| 1.4114 | 25900 | 5.7915 | - |
| 1.4119 | 25910 | 5.603 | - |
| 1.4125 | 25920 | 5.6877 | - |
| 1.4130 | 25930 | 5.682 | - |
| 1.4135 | 25940 | 5.9428 | - |
| 1.4141 | 25950 | 5.6472 | - |
| 1.4146 | 25960 | 5.6476 | - |
| 1.4152 | 25970 | 5.5319 | - |
| 1.4157 | 25980 | 5.9134 | - |
| 1.4163 | 25990 | 5.8328 | - |
| 1.4168 | 26000 | 5.784 | 0.6917 |
| 1.4174 | 26010 | 5.8445 | - |
| 1.4179 | 26020 | 5.7712 | - |
| 1.4185 | 26030 | 5.5298 | - |
| 1.4190 | 26040 | 5.5741 | - |
| 1.4195 | 26050 | 5.7802 | - |
| 1.4201 | 26060 | 5.7837 | - |
| 1.4206 | 26070 | 5.7675 | - |
| 1.4212 | 26080 | 5.9169 | - |
| 1.4217 | 26090 | 5.9055 | - |
| 1.4223 | 26100 | 5.751 | - |
| 1.4228 | 26110 | 5.7731 | - |
| 1.4234 | 26120 | 5.678 | - |
| 1.4239 | 26130 | 5.5623 | - |
| 1.4244 | 26140 | 5.7847 | - |
| 1.4250 | 26150 | 5.7321 | - |
| 1.4255 | 26160 | 5.6506 | - |
| 1.4261 | 26170 | 6.0169 | - |
| 1.4266 | 26180 | 5.6233 | - |
| 1.4272 | 26190 | 5.8072 | - |
| 1.4277 | 26200 | 5.6276 | - |
| 1.4283 | 26210 | 5.6786 | - |
| 1.4288 | 26220 | 5.7456 | - |
| 1.4293 | 26230 | 5.9566 | - |
| 1.4299 | 26240 | 5.4184 | - |
| 1.4304 | 26250 | 5.846 | - |
| 1.4310 | 26260 | 5.7348 | - |
| 1.4315 | 26270 | 5.7032 | - |
| 1.4321 | 26280 | 5.9746 | - |
| 1.4326 | 26290 | 5.9141 | - |
| 1.4332 | 26300 | 5.7215 | - |
| 1.4337 | 26310 | 5.8033 | - |
| 1.4343 | 26320 | 5.7762 | - |
| 1.4348 | 26330 | 5.6933 | - |
| 1.4353 | 26340 | 5.7382 | - |
| 1.4359 | 26350 | 5.7046 | - |
| 1.4364 | 26360 | 5.876 | - |
| 1.4370 | 26370 | 5.6143 | - |
| 1.4375 | 26380 | 5.765 | - |
| 1.4381 | 26390 | 5.8143 | - |
| 1.4386 | 26400 | 5.8085 | - |
| 1.4392 | 26410 | 5.6571 | - |
| 1.4397 | 26420 | 5.8183 | - |
| 1.4402 | 26430 | 5.8964 | - |
| 1.4408 | 26440 | 5.9685 | - |
| 1.4413 | 26450 | 5.4287 | - |
| 1.4419 | 26460 | 5.6642 | - |
| 1.4424 | 26470 | 5.9389 | - |
| 1.4430 | 26480 | 5.8526 | - |
| 1.4435 | 26490 | 5.8341 | - |
| 1.4441 | 26500 | 5.8011 | - |
| 1.4446 | 26510 | 5.9021 | - |
| 1.4452 | 26520 | 5.6486 | - |
| 1.4457 | 26530 | 5.968 | - |
| 1.4462 | 26540 | 5.8615 | - |
| 1.4468 | 26550 | 5.6659 | - |
| 1.4473 | 26560 | 5.8455 | - |
| 1.4479 | 26570 | 5.7759 | - |
| 1.4484 | 26580 | 5.7436 | - |
| 1.4490 | 26590 | 5.7917 | - |
| 1.4495 | 26600 | 5.5938 | - |
| 1.4501 | 26610 | 5.6709 | - |
| 1.4506 | 26620 | 5.6474 | - |
| 1.4511 | 26630 | 5.9278 | - |
| 1.4517 | 26640 | 5.7046 | - |
| 1.4522 | 26650 | 5.7623 | - |
| 1.4528 | 26660 | 5.7512 | - |
| 1.4533 | 26670 | 5.6979 | - |
| 1.4539 | 26680 | 5.8038 | - |
| 1.4544 | 26690 | 5.5639 | - |
| 1.4550 | 26700 | 5.9642 | - |
| 1.4555 | 26710 | 5.603 | - |
| 1.4561 | 26720 | 5.5525 | - |
| 1.4566 | 26730 | 5.7485 | - |
| 1.4571 | 26740 | 5.9481 | - |
| 1.4577 | 26750 | 5.6499 | - |
| 1.4582 | 26760 | 5.7689 | - |
| 1.4588 | 26770 | 5.7799 | - |
| 1.4593 | 26780 | 5.719 | - |
| 1.4599 | 26790 | 5.7361 | - |
| 1.4604 | 26800 | 5.6689 | - |
| 1.4610 | 26810 | 5.6166 | - |
| 1.4615 | 26820 | 6.0314 | - |
| 1.4620 | 26830 | 5.8525 | - |
| 1.4626 | 26840 | 5.7295 | - |
| 1.4631 | 26850 | 5.7052 | - |
| 1.4637 | 26860 | 5.7744 | - |
| 1.4642 | 26870 | 5.886 | - |
| 1.4648 | 26880 | 5.5467 | - |
| 1.4653 | 26890 | 5.7542 | - |
| 1.4659 | 26900 | 5.8413 | - |
| 1.4664 | 26910 | 5.6008 | - |
| 1.4670 | 26920 | 5.6598 | - |
| 1.4675 | 26930 | 5.7626 | - |
| 1.4680 | 26940 | 5.7474 | - |
| 1.4686 | 26950 | 5.7576 | - |
| 1.4691 | 26960 | 5.6866 | - |
| 1.4697 | 26970 | 5.6913 | - |
| 1.4702 | 26980 | 5.7498 | - |
| 1.4708 | 26990 | 5.7821 | - |
| 1.4713 | 27000 | 5.4373 | 0.6843 |
| 1.4719 | 27010 | 5.7529 | - |
| 1.4724 | 27020 | 5.6361 | - |
| 1.4729 | 27030 | 5.786 | - |
| 1.4735 | 27040 | 5.6261 | - |
| 1.4740 | 27050 | 5.6728 | - |
| 1.4746 | 27060 | 5.6724 | - |
| 1.4751 | 27070 | 5.8307 | - |
| 1.4757 | 27080 | 5.6634 | - |
| 1.4762 | 27090 | 5.7272 | - |
| 1.4768 | 27100 | 5.6305 | - |
| 1.4773 | 27110 | 5.8866 | - |
| 1.4778 | 27120 | 5.8328 | - |
| 1.4784 | 27130 | 5.7593 | - |
| 1.4789 | 27140 | 5.9639 | - |
| 1.4795 | 27150 | 5.6618 | - |
| 1.4800 | 27160 | 5.8943 | - |
| 1.4806 | 27170 | 5.454 | - |
| 1.4811 | 27180 | 5.8424 | - |
| 1.4817 | 27190 | 5.8821 | - |
| 1.4822 | 27200 | 5.8669 | - |
| 1.4828 | 27210 | 5.9276 | - |
| 1.4833 | 27220 | 5.6697 | - |
| 1.4838 | 27230 | 5.6345 | - |
| 1.4844 | 27240 | 5.6788 | - |
| 1.4849 | 27250 | 5.8049 | - |
| 1.4855 | 27260 | 5.6838 | - |
| 1.4860 | 27270 | 5.8305 | - |
| 1.4866 | 27280 | 5.5943 | - |
| 1.4871 | 27290 | 5.6068 | - |
| 1.4877 | 27300 | 5.8309 | - |
| 1.4882 | 27310 | 5.6879 | - |
| 1.4887 | 27320 | 5.6167 | - |
| 1.4893 | 27330 | 5.6702 | - |
| 1.4898 | 27340 | 5.6338 | - |
| 1.4904 | 27350 | 5.6434 | - |
| 1.4909 | 27360 | 5.6381 | - |
| 1.4915 | 27370 | 5.8862 | - |
| 1.4920 | 27380 | 5.859 | - |
| 1.4926 | 27390 | 5.7937 | - |
| 1.4931 | 27400 | 5.6266 | - |
| 1.4937 | 27410 | 5.6649 | - |
| 1.4942 | 27420 | 5.6748 | - |
| 1.4947 | 27430 | 5.583 | - |
| 1.4953 | 27440 | 5.578 | - |
| 1.4958 | 27450 | 5.7206 | - |
| 1.4964 | 27460 | 5.6704 | - |
| 1.4969 | 27470 | 5.5876 | - |
| 1.4975 | 27480 | 5.9325 | - |
| 1.4980 | 27490 | 5.7228 | - |
| 1.4986 | 27500 | 5.7484 | - |
| 1.4991 | 27510 | 5.9966 | - |
| 1.4996 | 27520 | 5.8698 | - |
| 1.5002 | 27530 | 5.6172 | - |
| 1.5007 | 27540 | 5.7012 | - |
| 1.5013 | 27550 | 5.6023 | - |
| 1.5018 | 27560 | 5.8695 | - |
| 1.5024 | 27570 | 5.6646 | - |
| 1.5029 | 27580 | 5.7254 | - |
| 1.5035 | 27590 | 5.6258 | - |
| 1.5040 | 27600 | 5.7096 | - |
| 1.5046 | 27610 | 5.6903 | - |
| 1.5051 | 27620 | 5.9145 | - |
| 1.5056 | 27630 | 5.836 | - |
| 1.5062 | 27640 | 5.6039 | - |
| 1.5067 | 27650 | 5.6499 | - |
| 1.5073 | 27660 | 5.8133 | - |
| 1.5078 | 27670 | 5.6713 | - |
| 1.5084 | 27680 | 5.6829 | - |
| 1.5089 | 27690 | 6.0312 | - |
| 1.5095 | 27700 | 6.0209 | - |
| 1.5100 | 27710 | 5.5031 | - |
| 1.5105 | 27720 | 5.7338 | - |
| 1.5111 | 27730 | 5.8609 | - |
| 1.5116 | 27740 | 5.6247 | - |
| 1.5122 | 27750 | 5.6008 | - |
| 1.5127 | 27760 | 5.6928 | - |
| 1.5133 | 27770 | 5.6246 | - |
| 1.5138 | 27780 | 5.5249 | - |
| 1.5144 | 27790 | 5.521 | - |
| 1.5149 | 27800 | 5.876 | - |
| 1.5154 | 27810 | 5.6187 | - |
| 1.5160 | 27820 | 5.8295 | - |
| 1.5165 | 27830 | 5.7417 | - |
| 1.5171 | 27840 | 5.7706 | - |
| 1.5176 | 27850 | 5.8193 | - |
| 1.5182 | 27860 | 5.8925 | - |
| 1.5187 | 27870 | 5.8622 | - |
| 1.5193 | 27880 | 5.4676 | - |
| 1.5198 | 27890 | 5.5733 | - |
| 1.5204 | 27900 | 5.849 | - |
| 1.5209 | 27910 | 5.6352 | - |
| 1.5214 | 27920 | 5.7934 | - |
| 1.5220 | 27930 | 5.9044 | - |
| 1.5225 | 27940 | 5.8376 | - |
| 1.5231 | 27950 | 5.7269 | - |
| 1.5236 | 27960 | 5.8983 | - |
| 1.5242 | 27970 | 5.6945 | - |
| 1.5247 | 27980 | 5.6583 | - |
| 1.5253 | 27990 | 5.5896 | - |
| 1.5258 | 28000 | 5.8294 | 0.6898 |
| 1.5263 | 28010 | 5.5884 | - |
| 1.5269 | 28020 | 5.6269 | - |
| 1.5274 | 28030 | 5.5775 | - |
| 1.5280 | 28040 | 5.7872 | - |
| 1.5285 | 28050 | 5.8223 | - |
| 1.5291 | 28060 | 5.963 | - |
| 1.5296 | 28070 | 5.6822 | - |
| 1.5302 | 28080 | 5.5723 | - |
| 1.5307 | 28090 | 5.8075 | - |
| 1.5313 | 28100 | 5.6271 | - |
| 1.5318 | 28110 | 5.7512 | - |
| 1.5323 | 28120 | 5.699 | - |
| 1.5329 | 28130 | 5.7096 | - |
| 1.5334 | 28140 | 5.7828 | - |
| 1.5340 | 28150 | 5.7964 | - |
| 1.5345 | 28160 | 5.8161 | - |
| 1.5351 | 28170 | 5.6993 | - |
| 1.5356 | 28180 | 5.9063 | - |
| 1.5362 | 28190 | 5.8882 | - |
| 1.5367 | 28200 | 5.779 | - |
| 1.5372 | 28210 | 5.7142 | - |
| 1.5378 | 28220 | 5.5176 | - |
| 1.5383 | 28230 | 5.9063 | - |
| 1.5389 | 28240 | 5.4376 | - |
| 1.5394 | 28250 | 5.7092 | - |
| 1.5400 | 28260 | 5.6393 | - |
| 1.5405 | 28270 | 5.8013 | - |
| 1.5411 | 28280 | 5.4832 | - |
| 1.5416 | 28290 | 5.5137 | - |
| 1.5422 | 28300 | 5.7803 | - |
| 1.5427 | 28310 | 5.5789 | - |
| 1.5432 | 28320 | 5.8604 | - |
| 1.5438 | 28330 | 5.6624 | - |
| 1.5443 | 28340 | 5.6247 | - |
| 1.5449 | 28350 | 5.762 | - |
| 1.5454 | 28360 | 5.5175 | - |
| 1.5460 | 28370 | 5.8604 | - |
| 1.5465 | 28380 | 5.5712 | - |
| 1.5471 | 28390 | 5.6886 | - |
| 1.5476 | 28400 | 5.8446 | - |
| 1.5481 | 28410 | 5.6062 | - |
| 1.5487 | 28420 | 5.7634 | - |
| 1.5492 | 28430 | 5.8559 | - |
| 1.5498 | 28440 | 5.825 | - |
| 1.5503 | 28450 | 5.8016 | - |
| 1.5509 | 28460 | 5.6805 | - |
| 1.5514 | 28470 | 5.7307 | - |
| 1.5520 | 28480 | 5.4915 | - |
| 1.5525 | 28490 | 5.891 | - |
| 1.5530 | 28500 | 5.8151 | - |
| 1.5536 | 28510 | 5.8613 | - |
| 1.5541 | 28520 | 5.5971 | - |
| 1.5547 | 28530 | 5.811 | - |
| 1.5552 | 28540 | 5.9088 | - |
| 1.5558 | 28550 | 5.7241 | - |
| 1.5563 | 28560 | 5.8253 | - |
| 1.5569 | 28570 | 5.7649 | - |
| 1.5574 | 28580 | 5.5884 | - |
| 1.5580 | 28590 | 5.5628 | - |
| 1.5585 | 28600 | 5.7621 | - |
| 1.5590 | 28610 | 5.7337 | - |
| 1.5596 | 28620 | 5.7539 | - |
| 1.5601 | 28630 | 5.5748 | - |
| 1.5607 | 28640 | 5.8434 | - |
| 1.5612 | 28650 | 5.6682 | - |
| 1.5618 | 28660 | 5.5971 | - |
| 1.5623 | 28670 | 5.7831 | - |
| 1.5629 | 28680 | 5.7958 | - |
| 1.5634 | 28690 | 5.806 | - |
| 1.5639 | 28700 | 5.6167 | - |
| 1.5645 | 28710 | 5.6638 | - |
| 1.5650 | 28720 | 5.5977 | - |
| 1.5656 | 28730 | 5.6272 | - |
| 1.5661 | 28740 | 5.8217 | - |
| 1.5667 | 28750 | 5.8021 | - |
| 1.5672 | 28760 | 5.7138 | - |
| 1.5678 | 28770 | 5.7072 | - |
| 1.5683 | 28780 | 5.7686 | - |
| 1.5689 | 28790 | 5.8178 | - |
| 1.5694 | 28800 | 5.6493 | - |
| 1.5699 | 28810 | 5.6889 | - |
| 1.5705 | 28820 | 6.0286 | - |
| 1.5710 | 28830 | 5.6949 | - |
| 1.5716 | 28840 | 5.5997 | - |
| 1.5721 | 28850 | 5.8566 | - |
| 1.5727 | 28860 | 5.4997 | - |
| 1.5732 | 28870 | 5.737 | - |
| 1.5738 | 28880 | 5.7486 | - |
| 1.5743 | 28890 | 5.733 | - |
| 1.5748 | 28900 | 5.904 | - |
| 1.5754 | 28910 | 5.9334 | - |
| 1.5759 | 28920 | 5.6105 | - |
| 1.5765 | 28930 | 5.5922 | - |
| 1.5770 | 28940 | 5.5835 | - |
| 1.5776 | 28950 | 5.5292 | - |
| 1.5781 | 28960 | 5.7168 | - |
| 1.5787 | 28970 | 5.6137 | - |
| 1.5792 | 28980 | 5.9261 | - |
| 1.5798 | 28990 | 5.6886 | - |
| 1.5803 | 29000 | 5.4618 | 0.6872 |
| 1.5808 | 29010 | 5.8104 | - |
| 1.5814 | 29020 | 5.4392 | - |
| 1.5819 | 29030 | 5.6948 | - |
| 1.5825 | 29040 | 5.7136 | - |
| 1.5830 | 29050 | 5.8202 | - |
| 1.5836 | 29060 | 5.6363 | - |
| 1.5841 | 29070 | 5.8318 | - |
| 1.5847 | 29080 | 5.7195 | - |
| 1.5852 | 29090 | 5.6166 | - |
| 1.5857 | 29100 | 5.7663 | - |
| 1.5863 | 29110 | 5.8111 | - |
| 1.5868 | 29120 | 5.6747 | - |
| 1.5874 | 29130 | 5.952 | - |
| 1.5879 | 29140 | 5.5404 | - |
| 1.5885 | 29150 | 5.7981 | - |
| 1.5890 | 29160 | 5.608 | - |
| 1.5896 | 29170 | 5.6981 | - |
| 1.5901 | 29180 | 5.6309 | - |
| 1.5906 | 29190 | 5.5746 | - |
| 1.5912 | 29200 | 5.753 | - |
| 1.5917 | 29210 | 5.8551 | - |
| 1.5923 | 29220 | 5.7852 | - |
| 1.5928 | 29230 | 6.0341 | - |
| 1.5934 | 29240 | 5.4789 | - |
| 1.5939 | 29250 | 5.7336 | - |
| 1.5945 | 29260 | 5.39 | - |
| 1.5950 | 29270 | 5.6543 | - |
| 1.5956 | 29280 | 5.8646 | - |
| 1.5961 | 29290 | 5.7199 | - |
| 1.5966 | 29300 | 5.7251 | - |
| 1.5972 | 29310 | 5.8654 | - |
| 1.5977 | 29320 | 5.5531 | - |
| 1.5983 | 29330 | 5.5082 | - |
| 1.5988 | 29340 | 5.6214 | - |
| 1.5994 | 29350 | 5.5853 | - |
| 1.5999 | 29360 | 5.6374 | - |
| 1.6005 | 29370 | 5.7041 | - |
| 1.6010 | 29380 | 5.4969 | - |
| 1.6015 | 29390 | 6.0864 | - |
| 1.6021 | 29400 | 5.674 | - |
| 1.6026 | 29410 | 5.879 | - |
| 1.6032 | 29420 | 5.7565 | - |
| 1.6037 | 29430 | 5.5711 | - |
| 1.6043 | 29440 | 5.8751 | - |
| 1.6048 | 29450 | 5.6506 | - |
| 1.6054 | 29460 | 5.9819 | - |
| 1.6059 | 29470 | 5.6114 | - |
| 1.6065 | 29480 | 5.7735 | - |
| 1.6070 | 29490 | 5.8167 | - |
| 1.6075 | 29500 | 5.6407 | - |
| 1.6081 | 29510 | 5.6858 | - |
| 1.6086 | 29520 | 5.5328 | - |
| 1.6092 | 29530 | 5.8151 | - |
| 1.6097 | 29540 | 5.8349 | - |
| 1.6103 | 29550 | 5.6624 | - |
| 1.6108 | 29560 | 5.6794 | - |
| 1.6114 | 29570 | 5.6617 | - |
| 1.6119 | 29580 | 5.7848 | - |
| 1.6124 | 29590 | 5.4632 | - |
| 1.6130 | 29600 | 5.6316 | - |
| 1.6135 | 29610 | 5.6277 | - |
| 1.6141 | 29620 | 5.7967 | - |
| 1.6146 | 29630 | 5.698 | - |
| 1.6152 | 29640 | 5.5885 | - |
| 1.6157 | 29650 | 5.8475 | - |
| 1.6163 | 29660 | 5.5861 | - |
| 1.6168 | 29670 | 5.5601 | - |
| 1.6174 | 29680 | 5.967 | - |
| 1.6179 | 29690 | 5.7774 | - |
| 1.6184 | 29700 | 5.8351 | - |
| 1.6190 | 29710 | 5.9808 | - |
| 1.6195 | 29720 | 5.6456 | - |
| 1.6201 | 29730 | 5.8979 | - |
| 1.6206 | 29740 | 5.8837 | - |
| 1.6212 | 29750 | 5.6975 | - |
| 1.6217 | 29760 | 5.8229 | - |
| 1.6223 | 29770 | 5.6602 | - |
| 1.6228 | 29780 | 5.9842 | - |
| 1.6233 | 29790 | 5.943 | - |
| 1.6239 | 29800 | 5.8054 | - |
| 1.6244 | 29810 | 5.595 | - |
| 1.6250 | 29820 | 5.5738 | - |
| 1.6255 | 29830 | 5.6377 | - |
| 1.6261 | 29840 | 5.8589 | - |
| 1.6266 | 29850 | 5.7878 | - |
| 1.6272 | 29860 | 5.6593 | - |
| 1.6277 | 29870 | 5.6771 | - |
| 1.6282 | 29880 | 5.8937 | - |
| 1.6288 | 29890 | 5.7199 | - |
| 1.6293 | 29900 | 5.8263 | - |
| 1.6299 | 29910 | 5.7049 | - |
| 1.6304 | 29920 | 5.5607 | - |
| 1.6310 | 29930 | 5.7573 | - |
| 1.6315 | 29940 | 5.7278 | - |
| 1.6321 | 29950 | 5.9191 | - |
| 1.6326 | 29960 | 5.8683 | - |
| 1.6332 | 29970 | 5.6242 | - |
| 1.6337 | 29980 | 5.5704 | - |
| 1.6342 | 29990 | 5.7111 | - |
| 1.6348 | 30000 | 5.7193 | 0.6889 |
| 1.6353 | 30010 | 5.654 | - |
| 1.6359 | 30020 | 5.5973 | - |
| 1.6364 | 30030 | 5.5024 | - |
| 1.6370 | 30040 | 5.7089 | - |
| 1.6375 | 30050 | 5.6795 | - |
| 1.6381 | 30060 | 5.5063 | - |
| 1.6386 | 30070 | 5.7094 | - |
| 1.6391 | 30080 | 5.4431 | - |
| 1.6397 | 30090 | 5.6947 | - |
| 1.6402 | 30100 | 5.3829 | - |
| 1.6408 | 30110 | 5.7455 | - |
| 1.6413 | 30120 | 5.7797 | - |
| 1.6419 | 30130 | 5.785 | - |
| 1.6424 | 30140 | 5.7418 | - |
| 1.6430 | 30150 | 5.7535 | - |
| 1.6435 | 30160 | 5.6852 | - |
| 1.6441 | 30170 | 5.8144 | - |
| 1.6446 | 30180 | 5.3938 | - |
| 1.6451 | 30190 | 5.7157 | - |
| 1.6457 | 30200 | 5.5586 | - |
| 1.6462 | 30210 | 5.7235 | - |
| 1.6468 | 30220 | 5.827 | - |
| 1.6473 | 30230 | 5.5521 | - |
| 1.6479 | 30240 | 5.5473 | - |
| 1.6484 | 30250 | 5.7494 | - |
| 1.6490 | 30260 | 5.7797 | - |
| 1.6495 | 30270 | 5.5488 | - |
| 1.6500 | 30280 | 5.682 | - |
| 1.6506 | 30290 | 5.7903 | - |
| 1.6511 | 30300 | 5.6737 | - |
| 1.6517 | 30310 | 5.6469 | - |
| 1.6522 | 30320 | 5.5287 | - |
| 1.6528 | 30330 | 5.7717 | - |
| 1.6533 | 30340 | 5.5641 | - |
| 1.6539 | 30350 | 5.5207 | - |
| 1.6544 | 30360 | 5.4624 | - |
| 1.6550 | 30370 | 5.5833 | - |
| 1.6555 | 30380 | 5.7176 | - |
| 1.6560 | 30390 | 5.7965 | - |
| 1.6566 | 30400 | 5.8115 | - |
| 1.6571 | 30410 | 5.5988 | - |
| 1.6577 | 30420 | 5.5256 | - |
| 1.6582 | 30430 | 5.8447 | - |
| 1.6588 | 30440 | 5.5812 | - |
| 1.6593 | 30450 | 5.789 | - |
| 1.6599 | 30460 | 5.5352 | - |
| 1.6604 | 30470 | 5.4437 | - |
| 1.6609 | 30480 | 5.7246 | - |
| 1.6615 | 30490 | 5.5639 | - |
| 1.6620 | 30500 | 5.8095 | - |
| 1.6626 | 30510 | 5.7278 | - |
| 1.6631 | 30520 | 5.9051 | - |
| 1.6637 | 30530 | 5.9434 | - |
| 1.6642 | 30540 | 5.6563 | - |
| 1.6648 | 30550 | 5.7072 | - |
| 1.6653 | 30560 | 5.7824 | - |
| 1.6658 | 30570 | 5.5596 | - |
| 1.6664 | 30580 | 5.6886 | - |
| 1.6669 | 30590 | 5.719 | - |
| 1.6675 | 30600 | 5.6862 | - |
| 1.6680 | 30610 | 5.7144 | - |
| 1.6686 | 30620 | 5.6461 | - |
| 1.6691 | 30630 | 5.7573 | - |
| 1.6697 | 30640 | 6.112 | - |
| 1.6702 | 30650 | 5.6665 | - |
| 1.6708 | 30660 | 5.6386 | - |
| 1.6713 | 30670 | 5.4544 | - |
| 1.6718 | 30680 | 5.4822 | - |
| 1.6724 | 30690 | 5.8188 | - |
| 1.6729 | 30700 | 5.7899 | - |
| 1.6735 | 30710 | 5.5723 | - |
| 1.6740 | 30720 | 5.8721 | - |
| 1.6746 | 30730 | 5.5292 | - |
| 1.6751 | 30740 | 5.6441 | - |
| 1.6757 | 30750 | 5.5736 | - |
| 1.6762 | 30760 | 5.3938 | - |
| 1.6767 | 30770 | 5.6371 | - |
| 1.6773 | 30780 | 5.4393 | - |
| 1.6778 | 30790 | 5.6772 | - |
| 1.6784 | 30800 | 5.6293 | - |
| 1.6789 | 30810 | 5.8715 | - |
| 1.6795 | 30820 | 5.6253 | - |
| 1.6800 | 30830 | 5.7036 | - |
| 1.6806 | 30840 | 5.7364 | - |
| 1.6811 | 30850 | 5.4044 | - |
| 1.6817 | 30860 | 5.5957 | - |
| 1.6822 | 30870 | 5.7117 | - |
| 1.6827 | 30880 | 5.7721 | - |
| 1.6833 | 30890 | 5.7709 | - |
| 1.6838 | 30900 | 5.6879 | - |
| 1.6844 | 30910 | 5.798 | - |
| 1.6849 | 30920 | 5.7243 | - |
| 1.6855 | 30930 | 5.5648 | - |
| 1.6860 | 30940 | 5.8282 | - |
| 1.6866 | 30950 | 5.8084 | - |
| 1.6871 | 30960 | 5.9778 | - |
| 1.6876 | 30970 | 5.745 | - |
| 1.6882 | 30980 | 5.7515 | - |
| 1.6887 | 30990 | 5.7775 | - |
| 1.6893 | 31000 | 5.7845 | 0.6873 |
| 1.6898 | 31010 | 5.6783 | - |
| 1.6904 | 31020 | 5.662 | - |
| 1.6909 | 31030 | 5.7855 | - |
| 1.6915 | 31040 | 5.7237 | - |
| 1.6920 | 31050 | 5.4847 | - |
| 1.6926 | 31060 | 5.6313 | - |
| 1.6931 | 31070 | 5.6081 | - |
| 1.6936 | 31080 | 5.6161 | - |
| 1.6942 | 31090 | 5.5648 | - |
| 1.6947 | 31100 | 5.7852 | - |
| 1.6953 | 31110 | 5.5981 | - |
| 1.6958 | 31120 | 5.6268 | - |
| 1.6964 | 31130 | 5.5979 | - |
| 1.6969 | 31140 | 5.5898 | - |
| 1.6975 | 31150 | 5.602 | - |
| 1.6980 | 31160 | 5.8336 | - |
| 1.6985 | 31170 | 5.7184 | - |
| 1.6991 | 31180 | 5.6339 | - |
| 1.6996 | 31190 | 5.6573 | - |
| 1.7002 | 31200 | 5.771 | - |
| 1.7007 | 31210 | 5.5427 | - |
| 1.7013 | 31220 | 5.764 | - |
| 1.7018 | 31230 | 5.5381 | - |
| 1.7024 | 31240 | 5.8032 | - |
| 1.7029 | 31250 | 5.7267 | - |
| 1.7034 | 31260 | 5.8322 | - |
| 1.7040 | 31270 | 5.4837 | - |
| 1.7045 | 31280 | 5.7879 | - |
| 1.7051 | 31290 | 5.7744 | - |
| 1.7056 | 31300 | 5.6699 | - |
| 1.7062 | 31310 | 5.7624 | - |
| 1.7067 | 31320 | 5.8476 | - |
| 1.7073 | 31330 | 5.5348 | - |
| 1.7078 | 31340 | 5.6769 | - |
| 1.7084 | 31350 | 5.7064 | - |
| 1.7089 | 31360 | 5.5187 | - |
| 1.7094 | 31370 | 5.4888 | - |
| 1.7100 | 31380 | 5.5563 | - |
| 1.7105 | 31390 | 5.9537 | - |
| 1.7111 | 31400 | 5.5141 | - |
| 1.7116 | 31410 | 5.6725 | - |
| 1.7122 | 31420 | 5.6018 | - |
| 1.7127 | 31430 | 5.6273 | - |
| 1.7133 | 31440 | 5.4704 | - |
| 1.7138 | 31450 | 5.6497 | - |
| 1.7143 | 31460 | 5.6116 | - |
| 1.7149 | 31470 | 5.8167 | - |
| 1.7154 | 31480 | 5.7666 | - |
| 1.7160 | 31490 | 5.5891 | - |
| 1.7165 | 31500 | 5.5312 | - |
| 1.7171 | 31510 | 5.4992 | - |
| 1.7176 | 31520 | 5.771 | - |
| 1.7182 | 31530 | 5.6161 | - |
| 1.7187 | 31540 | 5.8864 | - |
| 1.7193 | 31550 | 5.7855 | - |
| 1.7198 | 31560 | 5.6009 | - |
| 1.7203 | 31570 | 5.9746 | - |
| 1.7209 | 31580 | 5.6955 | - |
| 1.7214 | 31590 | 5.7307 | - |
| 1.7220 | 31600 | 5.9437 | - |
| 1.7225 | 31610 | 5.7426 | - |
| 1.7231 | 31620 | 5.4789 | - |
| 1.7236 | 31630 | 5.444 | - |
| 1.7242 | 31640 | 5.8309 | - |
| 1.7247 | 31650 | 5.5281 | - |
| 1.7252 | 31660 | 5.779 | - |
| 1.7258 | 31670 | 5.7055 | - |
| 1.7263 | 31680 | 5.5645 | - |
| 1.7269 | 31690 | 5.8086 | - |
| 1.7274 | 31700 | 5.7741 | - |
| 1.7280 | 31710 | 5.7854 | - |
| 1.7285 | 31720 | 5.9078 | - |
| 1.7291 | 31730 | 5.5895 | - |
| 1.7296 | 31740 | 5.6775 | - |
| 1.7302 | 31750 | 5.6494 | - |
| 1.7307 | 31760 | 5.7404 | - |
| 1.7312 | 31770 | 5.6914 | - |
| 1.7318 | 31780 | 5.7185 | - |
| 1.7323 | 31790 | 5.6688 | - |
| 1.7329 | 31800 | 5.4683 | - |
| 1.7334 | 31810 | 5.5454 | - |
| 1.7340 | 31820 | 5.6589 | - |
| 1.7345 | 31830 | 5.6749 | - |
| 1.7351 | 31840 | 5.7791 | - |
| 1.7356 | 31850 | 5.8153 | - |
| 1.7361 | 31860 | 5.7124 | - |
| 1.7367 | 31870 | 5.5871 | - |
| 1.7372 | 31880 | 5.729 | - |
| 1.7378 | 31890 | 5.5558 | - |
| 1.7383 | 31900 | 5.4533 | - |
| 1.7389 | 31910 | 5.4954 | - |
| 1.7394 | 31920 | 5.812 | - |
| 1.7400 | 31930 | 5.8792 | - |
| 1.7405 | 31940 | 5.634 | - |
| 1.7410 | 31950 | 5.7409 | - |
| 1.7416 | 31960 | 5.4621 | - |
| 1.7421 | 31970 | 5.8657 | - |
| 1.7427 | 31980 | 5.5149 | - |
| 1.7432 | 31990 | 5.7506 | - |
| 1.7438 | 32000 | 5.6806 | 0.6871 |
| 1.7443 | 32010 | 5.6679 | - |
| 1.7449 | 32020 | 5.6633 | - |
| 1.7454 | 32030 | 5.718 | - |
| 1.7460 | 32040 | 5.8698 | - |
| 1.7465 | 32050 | 5.6648 | - |
| 1.7470 | 32060 | 5.8475 | - |
| 1.7476 | 32070 | 5.5043 | - |
| 1.7481 | 32080 | 5.7619 | - |
| 1.7487 | 32090 | 5.7399 | - |
| 1.7492 | 32100 | 5.749 | - |
| 1.7498 | 32110 | 5.5657 | - |
| 1.7503 | 32120 | 5.7336 | - |
| 1.7509 | 32130 | 5.7157 | - |
| 1.7514 | 32140 | 5.671 | - |
| 1.7519 | 32150 | 5.9447 | - |
| 1.7525 | 32160 | 5.854 | - |
| 1.7530 | 32170 | 5.5469 | - |
| 1.7536 | 32180 | 5.6186 | - |
| 1.7541 | 32190 | 5.732 | - |
| 1.7547 | 32200 | 5.6857 | - |
| 1.7552 | 32210 | 5.62 | - |
| 1.7558 | 32220 | 5.7521 | - |
| 1.7563 | 32230 | 5.7421 | - |
| 1.7569 | 32240 | 5.4737 | - |
| 1.7574 | 32250 | 5.7081 | - |
| 1.7579 | 32260 | 5.7578 | - |
| 1.7585 | 32270 | 5.8273 | - |
| 1.7590 | 32280 | 5.8603 | - |
| 1.7596 | 32290 | 5.6179 | - |
| 1.7601 | 32300 | 5.5853 | - |
| 1.7607 | 32310 | 5.7181 | - |
| 1.7612 | 32320 | 5.5794 | - |
| 1.7618 | 32330 | 6.0057 | - |
| 1.7623 | 32340 | 5.7562 | - |
| 1.7628 | 32350 | 5.7242 | - |
| 1.7634 | 32360 | 5.6413 | - |
| 1.7639 | 32370 | 5.5051 | - |
| 1.7645 | 32380 | 5.5681 | - |
| 1.7650 | 32390 | 5.6019 | - |
| 1.7656 | 32400 | 5.6502 | - |
| 1.7661 | 32410 | 5.6975 | - |
| 1.7667 | 32420 | 5.6019 | - |
| 1.7672 | 32430 | 5.5178 | - |
| 1.7678 | 32440 | 5.5857 | - |
| 1.7683 | 32450 | 5.5962 | - |
| 1.7688 | 32460 | 5.7357 | - |
| 1.7694 | 32470 | 5.7212 | - |
| 1.7699 | 32480 | 5.6527 | - |
| 1.7705 | 32490 | 5.5547 | - |
| 1.7710 | 32500 | 5.8351 | - |
| 1.7716 | 32510 | 5.7251 | - |
| 1.7721 | 32520 | 5.509 | - |
| 1.7727 | 32530 | 5.8272 | - |
| 1.7732 | 32540 | 5.6932 | - |
| 1.7737 | 32550 | 5.721 | - |
| 1.7743 | 32560 | 5.7441 | - |
| 1.7748 | 32570 | 5.5458 | - |
| 1.7754 | 32580 | 5.782 | - |
| 1.7759 | 32590 | 5.8056 | - |
| 1.7765 | 32600 | 5.4873 | - |
| 1.7770 | 32610 | 5.7777 | - |
| 1.7776 | 32620 | 5.5349 | - |
| 1.7781 | 32630 | 5.4921 | - |
| 1.7786 | 32640 | 5.71 | - |
| 1.7792 | 32650 | 5.6658 | - |
| 1.7797 | 32660 | 5.6124 | - |
| 1.7803 | 32670 | 5.5367 | - |
| 1.7808 | 32680 | 5.6653 | - |
| 1.7814 | 32690 | 5.7634 | - |
| 1.7819 | 32700 | 5.5239 | - |
| 1.7825 | 32710 | 5.9019 | - |
| 1.7830 | 32720 | 5.7594 | - |
| 1.7836 | 32730 | 5.6813 | - |
| 1.7841 | 32740 | 5.3931 | - |
| 1.7846 | 32750 | 5.6076 | - |
| 1.7852 | 32760 | 5.5156 | - |
| 1.7857 | 32770 | 5.5532 | - |
| 1.7863 | 32780 | 5.8919 | - |
| 1.7868 | 32790 | 5.6434 | - |
| 1.7874 | 32800 | 5.6488 | - |
| 1.7879 | 32810 | 5.8567 | - |
| 1.7885 | 32820 | 5.7329 | - |
| 1.7890 | 32830 | 5.8574 | - |
| 1.7895 | 32840 | 5.8355 | - |
| 1.7901 | 32850 | 5.6796 | - |
| 1.7906 | 32860 | 5.6217 | - |
| 1.7912 | 32870 | 5.503 | - |
| 1.7917 | 32880 | 5.3437 | - |
| 1.7923 | 32890 | 5.9588 | - |
| 1.7928 | 32900 | 5.81 | - |
| 1.7934 | 32910 | 5.723 | - |
| 1.7939 | 32920 | 5.7631 | - |
| 1.7945 | 32930 | 5.5437 | - |
| 1.7950 | 32940 | 5.6433 | - |
| 1.7955 | 32950 | 5.7399 | - |
| 1.7961 | 32960 | 5.6701 | - |
| 1.7966 | 32970 | 5.6051 | - |
| 1.7972 | 32980 | 5.6888 | - |
| 1.7977 | 32990 | 5.689 | - |
| 1.7983 | 33000 | 5.7862 | 0.6859 |
| 1.7988 | 33010 | 5.8417 | - |
| 1.7994 | 33020 | 5.8542 | - |
| 1.7999 | 33030 | 5.6515 | - |
| 1.8004 | 33040 | 5.683 | - |
| 1.8010 | 33050 | 5.8174 | - |
| 1.8015 | 33060 | 5.6655 | - |
| 1.8021 | 33070 | 5.5791 | - |
| 1.8026 | 33080 | 5.6167 | - |
| 1.8032 | 33090 | 5.4275 | - |
| 1.8037 | 33100 | 5.808 | - |
| 1.8043 | 33110 | 5.8118 | - |
| 1.8048 | 33120 | 5.5954 | - |
| 1.8054 | 33130 | 5.5942 | - |
| 1.8059 | 33140 | 5.893 | - |
| 1.8064 | 33150 | 5.6614 | - |
| 1.8070 | 33160 | 5.9083 | - |
| 1.8075 | 33170 | 5.6049 | - |
| 1.8081 | 33180 | 5.6802 | - |
| 1.8086 | 33190 | 5.5609 | - |
| 1.8092 | 33200 | 5.9247 | - |
| 1.8097 | 33210 | 5.8091 | - |
| 1.8103 | 33220 | 5.6165 | - |
| 1.8108 | 33230 | 5.8356 | - |
| 1.8113 | 33240 | 5.7646 | - |
| 1.8119 | 33250 | 5.6806 | - |
| 1.8124 | 33260 | 5.7327 | - |
| 1.8130 | 33270 | 5.7655 | - |
| 1.8135 | 33280 | 5.4868 | - |
| 1.8141 | 33290 | 5.6266 | - |
| 1.8146 | 33300 | 5.541 | - |
| 1.8152 | 33310 | 5.9469 | - |
| 1.8157 | 33320 | 5.7606 | - |
| 1.8162 | 33330 | 5.4922 | - |
| 1.8168 | 33340 | 5.6335 | - |
| 1.8173 | 33350 | 5.6464 | - |
| 1.8179 | 33360 | 5.9194 | - |
| 1.8184 | 33370 | 5.8127 | - |
| 1.8190 | 33380 | 5.8568 | - |
| 1.8195 | 33390 | 5.6204 | - |
| 1.8201 | 33400 | 5.6684 | - |
| 1.8206 | 33410 | 5.5704 | - |
| 1.8212 | 33420 | 5.9843 | - |
| 1.8217 | 33430 | 5.6741 | - |
| 1.8222 | 33440 | 5.8343 | - |
| 1.8228 | 33450 | 5.7066 | - |
| 1.8233 | 33460 | 5.1977 | - |
| 1.8239 | 33470 | 5.3886 | - |
| 1.8244 | 33480 | 5.6268 | - |
| 1.8250 | 33490 | 6.0749 | - |
| 1.8255 | 33500 | 5.8411 | - |
| 1.8261 | 33510 | 5.4993 | - |
| 1.8266 | 33520 | 5.6929 | - |
| 1.8271 | 33530 | 5.7236 | - |
| 1.8277 | 33540 | 5.4794 | - |
| 1.8282 | 33550 | 5.551 | - |
| 1.8288 | 33560 | 5.6264 | - |
| 1.8293 | 33570 | 5.6655 | - |
| 1.8299 | 33580 | 5.6782 | - |
| 1.8304 | 33590 | 5.7342 | - |
| 1.8310 | 33600 | 5.5961 | - |
| 1.8315 | 33610 | 5.7811 | - |
| 1.8321 | 33620 | 5.8038 | - |
| 1.8326 | 33630 | 5.6818 | - |
| 1.8331 | 33640 | 5.5994 | - |
| 1.8337 | 33650 | 5.4841 | - |
| 1.8342 | 33660 | 5.453 | - |
| 1.8348 | 33670 | 5.5379 | - |
| 1.8353 | 33680 | 5.606 | - |
| 1.8359 | 33690 | 5.6328 | - |
| 1.8364 | 33700 | 5.7197 | - |
| 1.8370 | 33710 | 5.7969 | - |
| 1.8375 | 33720 | 5.5524 | - |
| 1.8380 | 33730 | 5.6468 | - |
| 1.8386 | 33740 | 5.5912 | - |
| 1.8391 | 33750 | 5.7547 | - |
| 1.8397 | 33760 | 5.688 | - |
| 1.8402 | 33770 | 5.7332 | - |
| 1.8408 | 33780 | 5.6257 | - |
| 1.8413 | 33790 | 5.4747 | - |
| 1.8419 | 33800 | 5.7266 | - |
| 1.8424 | 33810 | 5.7552 | - |
| 1.8430 | 33820 | 5.6194 | - |
| 1.8435 | 33830 | 5.6956 | - |
| 1.8440 | 33840 | 5.7892 | - |
| 1.8446 | 33850 | 5.7188 | - |
| 1.8451 | 33860 | 5.7134 | - |
| 1.8457 | 33870 | 5.8227 | - |
| 1.8462 | 33880 | 5.8103 | - |
| 1.8468 | 33890 | 5.8671 | - |
| 1.8473 | 33900 | 5.6552 | - |
| 1.8479 | 33910 | 5.6549 | - |
| 1.8484 | 33920 | 5.663 | - |
| 1.8489 | 33930 | 5.6402 | - |
| 1.8495 | 33940 | 5.8898 | - |
| 1.8500 | 33950 | 5.619 | - |
| 1.8506 | 33960 | 5.8954 | - |
| 1.8511 | 33970 | 5.7821 | - |
| 1.8517 | 33980 | 5.8052 | - |
| 1.8522 | 33990 | 5.8236 | - |
| 1.8528 | 34000 | 5.4399 | 0.6891 |
| 1.8533 | 34010 | 5.4208 | - |
| 1.8538 | 34020 | 5.5499 | - |
| 1.8544 | 34030 | 5.4838 | - |
| 1.8549 | 34040 | 5.7623 | - |
| 1.8555 | 34050 | 5.6627 | - |
| 1.8560 | 34060 | 5.639 | - |
| 1.8566 | 34070 | 5.5299 | - |
| 1.8571 | 34080 | 5.8438 | - |
| 1.8577 | 34090 | 5.6988 | - |
| 1.8582 | 34100 | 5.4479 | - |
| 1.8588 | 34110 | 5.5851 | - |
| 1.8593 | 34120 | 5.7237 | - |
| 1.8598 | 34130 | 5.5488 | - |
| 1.8604 | 34140 | 5.5068 | - |
| 1.8609 | 34150 | 5.5405 | - |
| 1.8615 | 34160 | 5.7482 | - |
| 1.8620 | 34170 | 5.6435 | - |
| 1.8626 | 34180 | 5.6869 | - |
| 1.8631 | 34190 | 5.6155 | - |
| 1.8637 | 34200 | 5.5449 | - |
| 1.8642 | 34210 | 5.6958 | - |
| 1.8647 | 34220 | 5.6509 | - |
| 1.8653 | 34230 | 5.8805 | - |
| 1.8658 | 34240 | 5.8694 | - |
| 1.8664 | 34250 | 5.6562 | - |
| 1.8669 | 34260 | 5.6974 | - |
| 1.8675 | 34270 | 5.902 | - |
| 1.8680 | 34280 | 5.4552 | - |
| 1.8686 | 34290 | 5.6811 | - |
| 1.8691 | 34300 | 5.7376 | - |
| 1.8697 | 34310 | 5.7682 | - |
| 1.8702 | 34320 | 5.5275 | - |
| 1.8707 | 34330 | 5.5649 | - |
| 1.8713 | 34340 | 5.8218 | - |
| 1.8718 | 34350 | 5.49 | - |
| 1.8724 | 34360 | 5.5707 | - |
| 1.8729 | 34370 | 5.6332 | - |
| 1.8735 | 34380 | 5.814 | - |
| 1.8740 | 34390 | 5.6897 | - |
| 1.8746 | 34400 | 5.8695 | - |
| 1.8751 | 34410 | 5.7627 | - |
| 1.8756 | 34420 | 5.5166 | - |
| 1.8762 | 34430 | 5.5838 | - |
| 1.8767 | 34440 | 5.598 | - |
| 1.8773 | 34450 | 5.742 | - |
| 1.8778 | 34460 | 5.4895 | - |
| 1.8784 | 34470 | 5.752 | - |
| 1.8789 | 34480 | 5.5787 | - |
| 1.8795 | 34490 | 5.7627 | - |
| 1.8800 | 34500 | 5.5178 | - |
| 1.8806 | 34510 | 5.3932 | - |
| 1.8811 | 34520 | 5.6751 | - |
| 1.8816 | 34530 | 5.7778 | - |
| 1.8822 | 34540 | 5.6605 | - |
| 1.8827 | 34550 | 5.6274 | - |
| 1.8833 | 34560 | 5.7469 | - |
| 1.8838 | 34570 | 5.2724 | - |
| 1.8844 | 34580 | 5.6747 | - |
| 1.8849 | 34590 | 5.5507 | - |
| 1.8855 | 34600 | 5.5879 | - |
| 1.8860 | 34610 | 5.6211 | - |
| 1.8865 | 34620 | 5.8733 | - |
| 1.8871 | 34630 | 5.9574 | - |
| 1.8876 | 34640 | 5.6856 | - |
| 1.8882 | 34650 | 5.7685 | - |
| 1.8887 | 34660 | 5.6194 | - |
| 1.8893 | 34670 | 5.6589 | - |
| 1.8898 | 34680 | 5.7623 | - |
| 1.8904 | 34690 | 5.6608 | - |
| 1.8909 | 34700 | 5.6889 | - |
| 1.8915 | 34710 | 5.6463 | - |
| 1.8920 | 34720 | 5.8174 | - |
| 1.8925 | 34730 | 5.7572 | - |
| 1.8931 | 34740 | 5.3284 | - |
| 1.8936 | 34750 | 5.7549 | - |
| 1.8942 | 34760 | 5.6623 | - |
| 1.8947 | 34770 | 5.7079 | - |
| 1.8953 | 34780 | 5.7508 | - |
| 1.8958 | 34790 | 5.6192 | - |
| 1.8964 | 34800 | 5.5665 | - |
| 1.8969 | 34810 | 5.5827 | - |
| 1.8974 | 34820 | 5.5272 | - |
| 1.8980 | 34830 | 5.5747 | - |
| 1.8985 | 34840 | 5.5061 | - |
| 1.8991 | 34850 | 5.6655 | - |
| 1.8996 | 34860 | 5.8277 | - |
| 1.9002 | 34870 | 5.7572 | - |
| 1.9007 | 34880 | 5.7303 | - |
| 1.9013 | 34890 | 5.6255 | - |
| 1.9018 | 34900 | 5.7256 | - |
| 1.9023 | 34910 | 5.4793 | - |
| 1.9029 | 34920 | 5.7313 | - |
| 1.9034 | 34930 | 5.6073 | - |
| 1.9040 | 34940 | 5.5428 | - |
| 1.9045 | 34950 | 5.3583 | - |
| 1.9051 | 34960 | 5.9431 | - |
| 1.9056 | 34970 | 5.7008 | - |
| 1.9062 | 34980 | 5.6749 | - |
| 1.9067 | 34990 | 5.7212 | - |
| 1.9073 | 35000 | 5.6087 | 0.6919 |
| 1.9078 | 35010 | 5.6965 | - |
| 1.9083 | 35020 | 5.6101 | - |
| 1.9089 | 35030 | 5.6671 | - |
| 1.9094 | 35040 | 5.4685 | - |
| 1.9100 | 35050 | 5.7413 | - |
| 1.9105 | 35060 | 5.7453 | - |
| 1.9111 | 35070 | 5.5792 | - |
| 1.9116 | 35080 | 5.4403 | - |
| 1.9122 | 35090 | 5.7417 | - |
| 1.9127 | 35100 | 5.82 | - |
| 1.9132 | 35110 | 5.6626 | - |
| 1.9138 | 35120 | 5.7229 | - |
| 1.9143 | 35130 | 5.5683 | - |
| 1.9149 | 35140 | 5.6504 | - |
| 1.9154 | 35150 | 5.9152 | - |
| 1.9160 | 35160 | 5.5265 | - |
| 1.9165 | 35170 | 5.6428 | - |
| 1.9171 | 35180 | 5.71 | - |
| 1.9176 | 35190 | 5.5394 | - |
| 1.9182 | 35200 | 5.8338 | - |
| 1.9187 | 35210 | 5.9323 | - |
| 1.9192 | 35220 | 5.5896 | - |
| 1.9198 | 35230 | 5.6332 | - |
| 1.9203 | 35240 | 5.5849 | - |
| 1.9209 | 35250 | 5.6515 | - |
| 1.9214 | 35260 | 6.0447 | - |
| 1.9220 | 35270 | 5.6565 | - |
| 1.9225 | 35280 | 5.5185 | - |
| 1.9231 | 35290 | 5.8831 | - |
| 1.9236 | 35300 | 5.6565 | - |
| 1.9241 | 35310 | 5.6972 | - |
| 1.9247 | 35320 | 5.6408 | - |
| 1.9252 | 35330 | 5.646 | - |
| 1.9258 | 35340 | 5.7322 | - |
| 1.9263 | 35350 | 5.6933 | - |
| 1.9269 | 35360 | 5.597 | - |
| 1.9274 | 35370 | 5.4795 | - |
| 1.9280 | 35380 | 5.7338 | - |
| 1.9285 | 35390 | 5.556 | - |
| 1.9291 | 35400 | 5.6779 | - |
| 1.9296 | 35410 | 5.9131 | - |
| 1.9301 | 35420 | 5.5778 | - |
| 1.9307 | 35430 | 5.778 | - |
| 1.9312 | 35440 | 5.746 | - |
| 1.9318 | 35450 | 5.7227 | - |
| 1.9323 | 35460 | 5.6349 | - |
| 1.9329 | 35470 | 5.7448 | - |
| 1.9334 | 35480 | 5.8197 | - |
| 1.9340 | 35490 | 5.8063 | - |
| 1.9345 | 35500 | 5.6896 | - |
| 1.9350 | 35510 | 5.4051 | - |
| 1.9356 | 35520 | 5.6716 | - |
| 1.9361 | 35530 | 5.6549 | - |
| 1.9367 | 35540 | 5.6678 | - |
| 1.9372 | 35550 | 5.7172 | - |
| 1.9378 | 35560 | 5.5531 | - |
| 1.9383 | 35570 | 5.6242 | - |
| 1.9389 | 35580 | 5.8103 | - |
| 1.9394 | 35590 | 5.6672 | - |
| 1.9399 | 35600 | 5.6362 | - |
| 1.9405 | 35610 | 5.6261 | - |
| 1.9410 | 35620 | 5.7776 | - |
| 1.9416 | 35630 | 5.7406 | - |
| 1.9421 | 35640 | 5.5377 | - |
| 1.9427 | 35650 | 5.5704 | - |
| 1.9432 | 35660 | 5.6037 | - |
| 1.9438 | 35670 | 5.5436 | - |
| 1.9443 | 35680 | 5.7578 | - |
| 1.9449 | 35690 | 5.5539 | - |
| 1.9454 | 35700 | 5.6299 | - |
| 1.9459 | 35710 | 5.6777 | - |
| 1.9465 | 35720 | 5.53 | - |
| 1.9470 | 35730 | 5.6062 | - |
| 1.9476 | 35740 | 5.2337 | - |
| 1.9481 | 35750 | 5.6892 | - |
| 1.9487 | 35760 | 5.9946 | - |
| 1.9492 | 35770 | 5.6165 | - |
| 1.9498 | 35780 | 5.5814 | - |
| 1.9503 | 35790 | 5.7497 | - |
| 1.9508 | 35800 | 5.6992 | - |
| 1.9514 | 35810 | 5.7048 | - |
| 1.9519 | 35820 | 5.7067 | - |
| 1.9525 | 35830 | 5.7498 | - |
| 1.9530 | 35840 | 5.7112 | - |
| 1.9536 | 35850 | 5.7333 | - |
| 1.9541 | 35860 | 5.6899 | - |
| 1.9547 | 35870 | 5.6267 | - |
| 1.9552 | 35880 | 5.7404 | - |
| 1.9558 | 35890 | 5.66 | - |
| 1.9563 | 35900 | 5.7127 | - |
| 1.9568 | 35910 | 5.7772 | - |
| 1.9574 | 35920 | 5.6014 | - |
| 1.9579 | 35930 | 5.6505 | - |
| 1.9585 | 35940 | 5.6651 | - |
| 1.9590 | 35950 | 5.5426 | - |
| 1.9596 | 35960 | 5.6431 | - |
| 1.9601 | 35970 | 5.618 | - |
| 1.9607 | 35980 | 5.604 | - |
| 1.9612 | 35990 | 5.672 | - |
| 1.9617 | 36000 | 5.6666 | 0.6908 |
| 1.9623 | 36010 | 5.5219 | - |
| 1.9628 | 36020 | 5.6634 | - |
| 1.9634 | 36030 | 5.572 | - |
| 1.9639 | 36040 | 5.5704 | - |
| 1.9645 | 36050 | 5.497 | - |
| 1.9650 | 36060 | 5.5455 | - |
| 1.9656 | 36070 | 5.5117 | - |
| 1.9661 | 36080 | 5.6914 | - |
| 1.9667 | 36090 | 5.5047 | - |
| 1.9672 | 36100 | 5.7191 | - |
| 1.9677 | 36110 | 5.4929 | - |
| 1.9683 | 36120 | 5.5093 | - |
| 1.9688 | 36130 | 5.6284 | - |
| 1.9694 | 36140 | 5.6813 | - |
| 1.9699 | 36150 | 5.5116 | - |
| 1.9705 | 36160 | 5.6879 | - |
| 1.9710 | 36170 | 5.7322 | - |
| 1.9716 | 36180 | 5.441 | - |
| 1.9721 | 36190 | 5.8573 | - |
| 1.9726 | 36200 | 5.8589 | - |
| 1.9732 | 36210 | 5.8021 | - |
| 1.9737 | 36220 | 5.5584 | - |
| 1.9743 | 36230 | 5.9301 | - |
| 1.9748 | 36240 | 5.8358 | - |
| 1.9754 | 36250 | 5.4245 | - |
| 1.9759 | 36260 | 5.6385 | - |
| 1.9765 | 36270 | 5.5535 | - |
| 1.9770 | 36280 | 5.6287 | - |
| 1.9775 | 36290 | 5.9215 | - |
| 1.9781 | 36300 | 5.5151 | - |
| 1.9786 | 36310 | 5.7416 | - |
| 1.9792 | 36320 | 5.5807 | - |
| 1.9797 | 36330 | 5.6892 | - |
| 1.9803 | 36340 | 5.5453 | - |
| 1.9808 | 36350 | 5.5685 | - |
| 1.9814 | 36360 | 5.6573 | - |
| 1.9819 | 36370 | 5.9029 | - |
| 1.9825 | 36380 | 5.586 | - |
| 1.9830 | 36390 | 5.7198 | - |
| 1.9835 | 36400 | 5.8645 | - |
| 1.9841 | 36410 | 5.7062 | - |
| 1.9846 | 36420 | 5.5645 | - |
| 1.9852 | 36430 | 5.5695 | - |
| 1.9857 | 36440 | 5.7157 | - |
| 1.9863 | 36450 | 5.7856 | - |
| 1.9868 | 36460 | 5.5422 | - |
| 1.9874 | 36470 | 5.5216 | - |
| 1.9879 | 36480 | 5.9494 | - |
| 1.9884 | 36490 | 5.5798 | - |
| 1.9890 | 36500 | 5.5782 | - |
| 1.9895 | 36510 | 5.8387 | - |
| 1.9901 | 36520 | 5.6276 | - |
| 1.9906 | 36530 | 5.5961 | - |
| 1.9912 | 36540 | 5.6155 | - |
| 1.9917 | 36550 | 5.6811 | - |
| 1.9923 | 36560 | 5.5413 | - |
| 1.9928 | 36570 | 5.7131 | - |
| 1.9934 | 36580 | 5.5582 | - |
| 1.9939 | 36590 | 5.6131 | - |
| 1.9944 | 36600 | 5.5514 | - |
| 1.9950 | 36610 | 5.8862 | - |
| 1.9955 | 36620 | 5.8448 | - |
| 1.9961 | 36630 | 5.7975 | - |
| 1.9966 | 36640 | 5.4738 | - |
| 1.9972 | 36650 | 5.6607 | - |
| 1.9977 | 36660 | 5.6491 | - |
| 1.9983 | 36670 | 5.6841 | - |
| 1.9988 | 36680 | 5.9853 | - |
| 1.9993 | 36690 | 5.7024 | - |
| 1.9999 | 36700 | 5.5852 | - |
| 2.0004 | 36710 | 5.4867 | - |
| 2.0010 | 36720 | 5.8128 | - |
| 2.0015 | 36730 | 5.7629 | - |
| 2.0021 | 36740 | 5.5694 | - |
| 2.0026 | 36750 | 5.824 | - |
| 2.0032 | 36760 | 5.8349 | - |
| 2.0037 | 36770 | 5.4195 | - |
| 2.0043 | 36780 | 5.5346 | - |
| 2.0048 | 36790 | 5.6422 | - |
| 2.0053 | 36800 | 5.7905 | - |
| 2.0059 | 36810 | 5.7566 | - |
| 2.0064 | 36820 | 5.5869 | - |
| 2.0070 | 36830 | 5.6272 | - |
| 2.0075 | 36840 | 5.8214 | - |
| 2.0081 | 36850 | 5.7116 | - |
| 2.0086 | 36860 | 5.6784 | - |
| 2.0092 | 36870 | 5.6436 | - |
| 2.0097 | 36880 | 5.4153 | - |
| 2.0102 | 36890 | 5.7278 | - |
| 2.0108 | 36900 | 5.5614 | - |
| 2.0113 | 36910 | 5.5749 | - |
| 2.0119 | 36920 | 5.4978 | - |
| 2.0124 | 36930 | 5.5259 | - |
| 2.0130 | 36940 | 5.6186 | - |
| 2.0135 | 36950 | 5.8284 | - |
| 2.0141 | 36960 | 5.7581 | - |
| 2.0146 | 36970 | 5.6287 | - |
| 2.0151 | 36980 | 5.9027 | - |
| 2.0157 | 36990 | 5.9063 | - |
| 2.0162 | 37000 | 5.7137 | 0.6961 |
| 2.0168 | 37010 | 5.6604 | - |
| 2.0173 | 37020 | 5.4087 | - |
| 2.0179 | 37030 | 5.5908 | - |
| 2.0184 | 37040 | 5.664 | - |
| 2.0190 | 37050 | 5.4668 | - |
| 2.0195 | 37060 | 5.5051 | - |
| 2.0201 | 37070 | 5.7477 | - |
| 2.0206 | 37080 | 5.5589 | - |
| 2.0211 | 37090 | 5.7661 | - |
| 2.0217 | 37100 | 5.781 | - |
| 2.0222 | 37110 | 5.6559 | - |
| 2.0228 | 37120 | 5.8653 | - |
| 2.0233 | 37130 | 5.822 | - |
| 2.0239 | 37140 | 5.6437 | - |
| 2.0244 | 37150 | 5.841 | - |
| 2.0250 | 37160 | 5.6607 | - |
| 2.0255 | 37170 | 5.7918 | - |
| 2.0260 | 37180 | 5.773 | - |
| 2.0266 | 37190 | 6.0197 | - |
| 2.0271 | 37200 | 5.6281 | - |
| 2.0277 | 37210 | 5.5078 | - |
| 2.0282 | 37220 | 5.5753 | - |
| 2.0288 | 37230 | 5.6788 | - |
| 2.0293 | 37240 | 5.6519 | - |
| 2.0299 | 37250 | 5.9167 | - |
| 2.0304 | 37260 | 5.9115 | - |
| 2.0310 | 37270 | 5.654 | - |
| 2.0315 | 37280 | 5.4426 | - |
| 2.0320 | 37290 | 5.8906 | - |
| 2.0326 | 37300 | 5.6337 | - |
| 2.0331 | 37310 | 5.6454 | - |
| 2.0337 | 37320 | 5.4562 | - |
| 2.0342 | 37330 | 5.7621 | - |
| 2.0348 | 37340 | 5.8045 | - |
| 2.0353 | 37350 | 5.3255 | - |
| 2.0359 | 37360 | 5.743 | - |
| 2.0364 | 37370 | 5.3681 | - |
| 2.0369 | 37380 | 5.4915 | - |
| 2.0375 | 37390 | 5.7081 | - |
| 2.0380 | 37400 | 5.8514 | - |
| 2.0386 | 37410 | 5.4222 | - |
| 2.0391 | 37420 | 5.7279 | - |
| 2.0397 | 37430 | 5.8803 | - |
| 2.0402 | 37440 | 5.6503 | - |
| 2.0408 | 37450 | 5.4309 | - |
| 2.0413 | 37460 | 5.538 | - |
| 2.0419 | 37470 | 5.6344 | - |
| 2.0424 | 37480 | 5.5271 | - |
| 2.0429 | 37490 | 5.5232 | - |
| 2.0435 | 37500 | 5.6228 | - |
| 2.0440 | 37510 | 5.5951 | - |
| 2.0446 | 37520 | 5.4975 | - |
| 2.0451 | 37530 | 5.7467 | - |
| 2.0457 | 37540 | 5.577 | - |
| 2.0462 | 37550 | 5.6641 | - |
| 2.0468 | 37560 | 5.539 | - |
| 2.0473 | 37570 | 5.4558 | - |
| 2.0478 | 37580 | 5.6746 | - |
| 2.0484 | 37590 | 5.6234 | - |
| 2.0489 | 37600 | 5.5995 | - |
| 2.0495 | 37610 | 5.8727 | - |
| 2.0500 | 37620 | 5.5337 | - |
| 2.0506 | 37630 | 5.6477 | - |
| 2.0511 | 37640 | 5.5946 | - |
| 2.0517 | 37650 | 5.638 | - |
| 2.0522 | 37660 | 5.4945 | - |
| 2.0527 | 37670 | 5.8507 | - |
| 2.0533 | 37680 | 5.5383 | - |
| 2.0538 | 37690 | 5.5878 | - |
| 2.0544 | 37700 | 5.5849 | - |
| 2.0549 | 37710 | 5.5539 | - |
| 2.0555 | 37720 | 5.7912 | - |
| 2.0560 | 37730 | 5.4514 | - |
| 2.0566 | 37740 | 5.7267 | - |
| 2.0571 | 37750 | 5.7874 | - |
| 2.0577 | 37760 | 5.9095 | - |
| 2.0582 | 37770 | 5.9036 | - |
| 2.0587 | 37780 | 5.709 | - |
| 2.0593 | 37790 | 5.6878 | - |
| 2.0598 | 37800 | 5.5625 | - |
| 2.0604 | 37810 | 5.5179 | - |
| 2.0609 | 37820 | 5.3602 | - |
| 2.0615 | 37830 | 5.455 | - |
| 2.0620 | 37840 | 5.7934 | - |
| 2.0626 | 37850 | 5.4582 | - |
| 2.0631 | 37860 | 5.8142 | - |
| 2.0636 | 37870 | 5.4321 | - |
| 2.0642 | 37880 | 5.5439 | - |
| 2.0647 | 37890 | 5.6879 | - |
| 2.0653 | 37900 | 5.7942 | - |
| 2.0658 | 37910 | 5.5444 | - |
| 2.0664 | 37920 | 5.819 | - |
| 2.0669 | 37930 | 5.8945 | - |
| 2.0675 | 37940 | 5.6771 | - |
| 2.0680 | 37950 | 5.5841 | - |
| 2.0686 | 37960 | 5.6538 | - |
| 2.0691 | 37970 | 5.509 | - |
| 2.0696 | 37980 | 5.6915 | - |
| 2.0702 | 37990 | 5.5237 | - |
| 2.0707 | 38000 | 5.6837 | 0.6884 |
| 2.0713 | 38010 | 5.6613 | - |
| 2.0718 | 38020 | 5.665 | - |
| 2.0724 | 38030 | 5.8732 | - |
| 2.0729 | 38040 | 5.5422 | - |
| 2.0735 | 38050 | 5.6292 | - |
| 2.0740 | 38060 | 5.7262 | - |
| 2.0745 | 38070 | 5.8573 | - |
| 2.0751 | 38080 | 5.763 | - |
| 2.0756 | 38090 | 5.5213 | - |
| 2.0762 | 38100 | 5.6803 | - |
| 2.0767 | 38110 | 5.5856 | - |
| 2.0773 | 38120 | 5.4906 | - |
| 2.0778 | 38130 | 5.6818 | - |
| 2.0784 | 38140 | 5.7125 | - |
| 2.0789 | 38150 | 5.5857 | - |
| 2.0795 | 38160 | 5.5015 | - |
| 2.0800 | 38170 | 5.6506 | - |
| 2.0805 | 38180 | 5.3714 | - |
| 2.0811 | 38190 | 5.383 | - |
| 2.0816 | 38200 | 5.6501 | - |
| 2.0822 | 38210 | 5.6072 | - |
| 2.0827 | 38220 | 5.4825 | - |
| 2.0833 | 38230 | 5.6555 | - |
| 2.0838 | 38240 | 5.8959 | - |
| 2.0844 | 38250 | 5.6524 | - |
| 2.0849 | 38260 | 5.7204 | - |
| 2.0854 | 38270 | 5.6661 | - |
| 2.0860 | 38280 | 5.5431 | - |
| 2.0865 | 38290 | 5.3475 | - |
| 2.0871 | 38300 | 5.2761 | - |
| 2.0876 | 38310 | 5.7001 | - |
| 2.0882 | 38320 | 5.4953 | - |
| 2.0887 | 38330 | 5.6175 | - |
| 2.0893 | 38340 | 5.5405 | - |
| 2.0898 | 38350 | 5.6766 | - |
| 2.0903 | 38360 | 5.8138 | - |
| 2.0909 | 38370 | 5.6525 | - |
| 2.0914 | 38380 | 5.7314 | - |
| 2.0920 | 38390 | 5.712 | - |
| 2.0925 | 38400 | 5.8276 | - |
| 2.0931 | 38410 | 5.5457 | - |
| 2.0936 | 38420 | 5.6108 | - |
| 2.0942 | 38430 | 5.5278 | - |
| 2.0947 | 38440 | 5.6203 | - |
| 2.0953 | 38450 | 5.5805 | - |
| 2.0958 | 38460 | 5.6855 | - |
| 2.0963 | 38470 | 5.6027 | - |
| 2.0969 | 38480 | 5.519 | - |
| 2.0974 | 38490 | 5.534 | - |
| 2.0980 | 38500 | 5.3477 | - |
| 2.0985 | 38510 | 5.7697 | - |
| 2.0991 | 38520 | 5.5658 | - |
| 2.0996 | 38530 | 5.7098 | - |
| 2.1002 | 38540 | 5.5229 | - |
| 2.1007 | 38550 | 5.693 | - |
| 2.1012 | 38560 | 5.6516 | - |
| 2.1018 | 38570 | 5.7561 | - |
| 2.1023 | 38580 | 5.6427 | - |
| 2.1029 | 38590 | 5.4958 | - |
| 2.1034 | 38600 | 5.4725 | - |
| 2.1040 | 38610 | 5.6968 | - |
| 2.1045 | 38620 | 5.7322 | - |
| 2.1051 | 38630 | 5.9095 | - |
| 2.1056 | 38640 | 5.5732 | - |
| 2.1062 | 38650 | 5.2433 | - |
| 2.1067 | 38660 | 5.4063 | - |
| 2.1072 | 38670 | 5.369 | - |
| 2.1078 | 38680 | 5.6998 | - |
| 2.1083 | 38690 | 5.5563 | - |
| 2.1089 | 38700 | 5.7603 | - |
| 2.1094 | 38710 | 5.6548 | - |
| 2.1100 | 38720 | 5.464 | - |
| 2.1105 | 38730 | 5.8525 | - |
| 2.1111 | 38740 | 5.4765 | - |
| 2.1116 | 38750 | 5.7047 | - |
| 2.1121 | 38760 | 5.4599 | - |
| 2.1127 | 38770 | 5.6071 | - |
| 2.1132 | 38780 | 5.8043 | - |
| 2.1138 | 38790 | 5.5625 | - |
| 2.1143 | 38800 | 5.4377 | - |
| 2.1149 | 38810 | 5.3608 | - |
| 2.1154 | 38820 | 5.5746 | - |
| 2.1160 | 38830 | 5.4753 | - |
| 2.1165 | 38840 | 5.6117 | - |
| 2.1171 | 38850 | 5.5018 | - |
| 2.1176 | 38860 | 5.7313 | - |
| 2.1181 | 38870 | 5.544 | - |
| 2.1187 | 38880 | 5.7055 | - |
| 2.1192 | 38890 | 5.6198 | - |
| 2.1198 | 38900 | 5.5045 | - |
| 2.1203 | 38910 | 5.7471 | - |
| 2.1209 | 38920 | 5.7276 | - |
| 2.1214 | 38930 | 5.6122 | - |
| 2.1220 | 38940 | 5.5157 | - |
| 2.1225 | 38950 | 5.6425 | - |
| 2.1230 | 38960 | 5.5836 | - |
| 2.1236 | 38970 | 5.7188 | - |
| 2.1241 | 38980 | 5.4286 | - |
| 2.1247 | 38990 | 5.5195 | - |
| 2.1252 | 39000 | 5.904 | 0.6901 |
| 2.1258 | 39010 | 5.6328 | - |
| 2.1263 | 39020 | 5.6053 | - |
| 2.1269 | 39030 | 5.6856 | - |
| 2.1274 | 39040 | 5.5866 | - |
| 2.1279 | 39050 | 5.5536 | - |
| 2.1285 | 39060 | 5.6467 | - |
| 2.1290 | 39070 | 5.7159 | - |
| 2.1296 | 39080 | 5.3243 | - |
| 2.1301 | 39090 | 5.7955 | - |
| 2.1307 | 39100 | 5.5479 | - |
| 2.1312 | 39110 | 5.7192 | - |
| 2.1318 | 39120 | 5.857 | - |
| 2.1323 | 39130 | 5.6043 | - |
| 2.1329 | 39140 | 5.6366 | - |
| 2.1334 | 39150 | 5.3907 | - |
| 2.1339 | 39160 | 5.4976 | - |
| 2.1345 | 39170 | 5.5185 | - |
| 2.1350 | 39180 | 5.5903 | - |
| 2.1356 | 39190 | 5.6862 | - |
| 2.1361 | 39200 | 5.7643 | - |
| 2.1367 | 39210 | 5.5326 | - |
| 2.1372 | 39220 | 5.7061 | - |
| 2.1378 | 39230 | 5.4921 | - |
| 2.1383 | 39240 | 5.6341 | - |
| 2.1388 | 39250 | 5.7728 | - |
| 2.1394 | 39260 | 5.5848 | - |
| 2.1399 | 39270 | 5.6731 | - |
| 2.1405 | 39280 | 5.6553 | - |
| 2.1410 | 39290 | 5.6981 | - |
| 2.1416 | 39300 | 5.7882 | - |
| 2.1421 | 39310 | 5.7516 | - |
| 2.1427 | 39320 | 5.6181 | - |
| 2.1432 | 39330 | 5.83 | - |
| 2.1438 | 39340 | 5.4934 | - |
| 2.1443 | 39350 | 5.7942 | - |
| 2.1448 | 39360 | 5.4985 | - |
| 2.1454 | 39370 | 5.5094 | - |
| 2.1459 | 39380 | 5.6568 | - |
| 2.1465 | 39390 | 5.7132 | - |
| 2.1470 | 39400 | 5.6266 | - |
| 2.1476 | 39410 | 5.3815 | - |
| 2.1481 | 39420 | 5.7674 | - |
| 2.1487 | 39430 | 5.7849 | - |
| 2.1492 | 39440 | 5.8302 | - |
| 2.1497 | 39450 | 5.7999 | - |
| 2.1503 | 39460 | 5.9153 | - |
| 2.1508 | 39470 | 5.7056 | - |
| 2.1514 | 39480 | 5.4104 | - |
| 2.1519 | 39490 | 5.5541 | - |
| 2.1525 | 39500 | 5.5738 | - |
| 2.1530 | 39510 | 5.5007 | - |
| 2.1536 | 39520 | 5.5745 | - |
| 2.1541 | 39530 | 5.8644 | - |
| 2.1547 | 39540 | 5.9136 | - |
| 2.1552 | 39550 | 5.8693 | - |
| 2.1557 | 39560 | 5.6823 | - |
| 2.1563 | 39570 | 5.6214 | - |
| 2.1568 | 39580 | 5.6019 | - |
| 2.1574 | 39590 | 5.5432 | - |
| 2.1579 | 39600 | 5.6066 | - |
| 2.1585 | 39610 | 5.6059 | - |
| 2.1590 | 39620 | 5.4624 | - |
| 2.1596 | 39630 | 5.7927 | - |
| 2.1601 | 39640 | 5.7084 | - |
| 2.1606 | 39650 | 5.7888 | - |
| 2.1612 | 39660 | 5.6803 | - |
| 2.1617 | 39670 | 5.289 | - |
| 2.1623 | 39680 | 5.4893 | - |
| 2.1628 | 39690 | 5.7256 | - |
| 2.1634 | 39700 | 5.8809 | - |
| 2.1639 | 39710 | 5.7948 | - |
| 2.1645 | 39720 | 5.7562 | - |
| 2.1650 | 39730 | 5.5545 | - |
| 2.1655 | 39740 | 5.6683 | - |
| 2.1661 | 39750 | 5.5002 | - |
| 2.1666 | 39760 | 5.6553 | - |
| 2.1672 | 39770 | 6.0511 | - |
| 2.1677 | 39780 | 5.7536 | - |
| 2.1683 | 39790 | 5.6023 | - |
| 2.1688 | 39800 | 5.5776 | - |
| 2.1694 | 39810 | 5.633 | - |
| 2.1699 | 39820 | 5.4637 | - |
| 2.1705 | 39830 | 5.6016 | - |
| 2.1710 | 39840 | 5.3712 | - |
| 2.1715 | 39850 | 5.6851 | - |
| 2.1721 | 39860 | 5.7367 | - |
| 2.1726 | 39870 | 5.7432 | - |
| 2.1732 | 39880 | 5.6845 | - |
| 2.1737 | 39890 | 5.6255 | - |
| 2.1743 | 39900 | 5.6885 | - |
| 2.1748 | 39910 | 5.7249 | - |
| 2.1754 | 39920 | 5.7733 | - |
| 2.1759 | 39930 | 5.6314 | - |
| 2.1764 | 39940 | 5.701 | - |
| 2.1770 | 39950 | 5.5479 | - |
| 2.1775 | 39960 | 5.5841 | - |
| 2.1781 | 39970 | 5.6808 | - |
| 2.1786 | 39980 | 5.297 | - |
| 2.1792 | 39990 | 5.7787 | - |
| 2.1797 | 40000 | 5.4262 | 0.6923 |
| 2.1803 | 40010 | 5.4094 | - |
| 2.1808 | 40020 | 5.6077 | - |
| 2.1814 | 40030 | 5.7238 | - |
| 2.1819 | 40040 | 5.6333 | - |
| 2.1824 | 40050 | 5.9433 | - |
| 2.1830 | 40060 | 5.7621 | - |
| 2.1835 | 40070 | 5.7663 | - |
| 2.1841 | 40080 | 5.6947 | - |
| 2.1846 | 40090 | 5.6844 | - |
| 2.1852 | 40100 | 5.4874 | - |
| 2.1857 | 40110 | 5.5102 | - |
| 2.1863 | 40120 | 5.685 | - |
| 2.1868 | 40130 | 5.5805 | - |
| 2.1873 | 40140 | 5.5557 | - |
| 2.1879 | 40150 | 5.539 | - |
| 2.1884 | 40160 | 5.7047 | - |
| 2.1890 | 40170 | 5.5921 | - |
| 2.1895 | 40180 | 5.5988 | - |
| 2.1901 | 40190 | 5.3031 | - |
| 2.1906 | 40200 | 5.7491 | - |
| 2.1912 | 40210 | 5.7018 | - |
| 2.1917 | 40220 | 5.6352 | - |
| 2.1923 | 40230 | 5.9777 | - |
| 2.1928 | 40240 | 5.4753 | - |
| 2.1933 | 40250 | 5.6211 | - |
| 2.1939 | 40260 | 5.8203 | - |
| 2.1944 | 40270 | 5.6589 | - |
| 2.1950 | 40280 | 5.6274 | - |
| 2.1955 | 40290 | 5.3848 | - |
| 2.1961 | 40300 | 5.4454 | - |
| 2.1966 | 40310 | 5.5903 | - |
| 2.1972 | 40320 | 5.5291 | - |
| 2.1977 | 40330 | 5.5607 | - |
| 2.1982 | 40340 | 5.5985 | - |
| 2.1988 | 40350 | 5.6311 | - |
| 2.1993 | 40360 | 5.4996 | - |
| 2.1999 | 40370 | 5.6153 | - |
| 2.2004 | 40380 | 5.739 | - |
| 2.2010 | 40390 | 5.8021 | - |
| 2.2015 | 40400 | 5.6778 | - |
| 2.2021 | 40410 | 5.6277 | - |
| 2.2026 | 40420 | 5.5079 | - |
| 2.2031 | 40430 | 5.7032 | - |
| 2.2037 | 40440 | 5.7699 | - |
| 2.2042 | 40450 | 5.6696 | - |
| 2.2048 | 40460 | 5.5753 | - |
| 2.2053 | 40470 | 5.4531 | - |
| 2.2059 | 40480 | 5.4323 | - |
| 2.2064 | 40490 | 5.7722 | - |
| 2.2070 | 40500 | 5.4933 | - |
| 2.2075 | 40510 | 5.7261 | - |
| 2.2081 | 40520 | 5.6455 | - |
| 2.2086 | 40530 | 5.6134 | - |
| 2.2091 | 40540 | 5.7554 | - |
| 2.2097 | 40550 | 5.7807 | - |
| 2.2102 | 40560 | 5.4852 | - |
| 2.2108 | 40570 | 5.7689 | - |
| 2.2113 | 40580 | 5.5298 | - |
| 2.2119 | 40590 | 5.7152 | - |
| 2.2124 | 40600 | 5.5423 | - |
| 2.2130 | 40610 | 5.7841 | - |
| 2.2135 | 40620 | 5.5857 | - |
| 2.2140 | 40630 | 5.4435 | - |
| 2.2146 | 40640 | 5.822 | - |
| 2.2151 | 40650 | 5.3782 | - |
| 2.2157 | 40660 | 5.4891 | - |
| 2.2162 | 40670 | 5.6373 | - |
| 2.2168 | 40680 | 5.6987 | - |
| 2.2173 | 40690 | 5.6895 | - |
| 2.2179 | 40700 | 5.7078 | - |
| 2.2184 | 40710 | 5.7001 | - |
| 2.2190 | 40720 | 5.7431 | - |
| 2.2195 | 40730 | 5.7076 | - |
| 2.2200 | 40740 | 5.6395 | - |
| 2.2206 | 40750 | 5.5296 | - |
| 2.2211 | 40760 | 5.8378 | - |
| 2.2217 | 40770 | 5.5987 | - |
| 2.2222 | 40780 | 5.603 | - |
| 2.2228 | 40790 | 5.7256 | - |
| 2.2233 | 40800 | 5.7419 | - |
| 2.2239 | 40810 | 5.4552 | - |
| 2.2244 | 40820 | 5.5118 | - |
| 2.2249 | 40830 | 5.6432 | - |
| 2.2255 | 40840 | 5.7273 | - |
| 2.2260 | 40850 | 5.6025 | - |
| 2.2266 | 40860 | 5.4365 | - |
| 2.2271 | 40870 | 5.4854 | - |
| 2.2277 | 40880 | 5.6309 | - |
| 2.2282 | 40890 | 5.6332 | - |
| 2.2288 | 40900 | 5.4576 | - |
| 2.2293 | 40910 | 5.5968 | - |
| 2.2299 | 40920 | 5.7093 | - |
| 2.2304 | 40930 | 5.6459 | - |
| 2.2309 | 40940 | 5.6803 | - |
| 2.2315 | 40950 | 5.3617 | - |
| 2.2320 | 40960 | 5.9951 | - |
| 2.2326 | 40970 | 5.571 | - |
| 2.2331 | 40980 | 5.6629 | - |
| 2.2337 | 40990 | 5.791 | - |
| 2.2342 | 41000 | 5.6624 | 0.6882 |
| 2.2348 | 41010 | 5.8321 | - |
| 2.2353 | 41020 | 5.6031 | - |
| 2.2358 | 41030 | 5.5991 | - |
| 2.2364 | 41040 | 5.7008 | - |
| 2.2369 | 41050 | 5.7369 | - |
| 2.2375 | 41060 | 5.7168 | - |
| 2.2380 | 41070 | 5.7321 | - |
| 2.2386 | 41080 | 5.5646 | - |
| 2.2391 | 41090 | 5.6073 | - |
| 2.2397 | 41100 | 5.4606 | - |
| 2.2402 | 41110 | 5.6959 | - |
| 2.2407 | 41120 | 5.5593 | - |
| 2.2413 | 41130 | 5.6686 | - |
| 2.2418 | 41140 | 5.7371 | - |
| 2.2424 | 41150 | 5.7542 | - |
| 2.2429 | 41160 | 5.6539 | - |
| 2.2435 | 41170 | 5.6291 | - |
| 2.2440 | 41180 | 5.2459 | - |
| 2.2446 | 41190 | 5.6759 | - |
| 2.2451 | 41200 | 5.6909 | - |
| 2.2457 | 41210 | 5.7473 | - |
| 2.2462 | 41220 | 5.525 | - |
| 2.2467 | 41230 | 5.7351 | - |
| 2.2473 | 41240 | 5.5452 | - |
| 2.2478 | 41250 | 5.5482 | - |
| 2.2484 | 41260 | 5.5519 | - |
| 2.2489 | 41270 | 5.5107 | - |
| 2.2495 | 41280 | 5.538 | - |
| 2.2500 | 41290 | 5.5844 | - |
| 2.2506 | 41300 | 5.7126 | - |
| 2.2511 | 41310 | 5.645 | - |
| 2.2516 | 41320 | 5.6535 | - |
| 2.2522 | 41330 | 5.5748 | - |
| 2.2527 | 41340 | 5.6756 | - |
| 2.2533 | 41350 | 5.3986 | - |
| 2.2538 | 41360 | 5.3662 | - |
| 2.2544 | 41370 | 5.3137 | - |
| 2.2549 | 41380 | 5.5062 | - |
| 2.2555 | 41390 | 5.5293 | - |
| 2.2560 | 41400 | 5.7848 | - |
| 2.2566 | 41410 | 5.7028 | - |
| 2.2571 | 41420 | 5.5067 | - |
| 2.2576 | 41430 | 5.5663 | - |
| 2.2582 | 41440 | 5.485 | - |
| 2.2587 | 41450 | 5.7028 | - |
| 2.2593 | 41460 | 5.6952 | - |
| 2.2598 | 41470 | 5.6463 | - |
| 2.2604 | 41480 | 5.6894 | - |
| 2.2609 | 41490 | 5.5836 | - |
| 2.2615 | 41500 | 5.6541 | - |
| 2.2620 | 41510 | 5.4804 | - |
| 2.2625 | 41520 | 5.5554 | - |
| 2.2631 | 41530 | 5.4708 | - |
| 2.2636 | 41540 | 5.5211 | - |
| 2.2642 | 41550 | 5.5447 | - |
| 2.2647 | 41560 | 5.8167 | - |
| 2.2653 | 41570 | 5.5624 | - |
| 2.2658 | 41580 | 5.4211 | - |
| 2.2664 | 41590 | 5.9176 | - |
| 2.2669 | 41600 | 5.5445 | - |
| 2.2675 | 41610 | 5.5662 | - |
| 2.2680 | 41620 | 5.3586 | - |
| 2.2685 | 41630 | 5.6227 | - |
| 2.2691 | 41640 | 5.8398 | - |
| 2.2696 | 41650 | 5.7433 | - |
| 2.2702 | 41660 | 5.6354 | - |
| 2.2707 | 41670 | 5.6215 | - |
| 2.2713 | 41680 | 5.5432 | - |
| 2.2718 | 41690 | 5.5731 | - |
| 2.2724 | 41700 | 5.6231 | - |
| 2.2729 | 41710 | 5.7351 | - |
| 2.2734 | 41720 | 5.7066 | - |
| 2.2740 | 41730 | 5.3475 | - |
| 2.2745 | 41740 | 5.531 | - |
| 2.2751 | 41750 | 5.6704 | - |
| 2.2756 | 41760 | 5.7091 | - |
| 2.2762 | 41770 | 5.4904 | - |
| 2.2767 | 41780 | 5.7339 | - |
| 2.2773 | 41790 | 5.5576 | - |
| 2.2778 | 41800 | 5.6058 | - |
| 2.2783 | 41810 | 5.7388 | - |
| 2.2789 | 41820 | 5.7356 | - |
| 2.2794 | 41830 | 5.6892 | - |
| 2.2800 | 41840 | 5.7691 | - |
| 2.2805 | 41850 | 5.746 | - |
| 2.2811 | 41860 | 5.5248 | - |
| 2.2816 | 41870 | 5.3828 | - |
| 2.2822 | 41880 | 5.5342 | - |
| 2.2827 | 41890 | 5.65 | - |
| 2.2833 | 41900 | 5.6627 | - |
| 2.2838 | 41910 | 5.7146 | - |
| 2.2843 | 41920 | 5.6571 | - |
| 2.2849 | 41930 | 5.6425 | - |
| 2.2854 | 41940 | 5.6897 | - |
| 2.2860 | 41950 | 5.5007 | - |
| 2.2865 | 41960 | 5.3832 | - |
| 2.2871 | 41970 | 5.3857 | - |
| 2.2876 | 41980 | 5.6623 | - |
| 2.2882 | 41990 | 5.7058 | - |
| 2.2887 | 42000 | 5.6794 | 0.6851 |
| 2.2892 | 42010 | 5.6877 | - |
| 2.2898 | 42020 | 5.734 | - |
| 2.2903 | 42030 | 5.7303 | - |
| 2.2909 | 42040 | 5.7311 | - |
| 2.2914 | 42050 | 5.665 | - |
| 2.2920 | 42060 | 5.4208 | - |
| 2.2925 | 42070 | 5.6013 | - |
| 2.2931 | 42080 | 5.4601 | - |
| 2.2936 | 42090 | 5.5149 | - |
| 2.2942 | 42100 | 5.5576 | - |
| 2.2947 | 42110 | 5.7902 | - |
| 2.2952 | 42120 | 5.7244 | - |
| 2.2958 | 42130 | 5.6063 | - |
| 2.2963 | 42140 | 5.8756 | - |
| 2.2969 | 42150 | 5.5716 | - |
| 2.2974 | 42160 | 5.5946 | - |
| 2.2980 | 42170 | 5.5641 | - |
| 2.2985 | 42180 | 5.6591 | - |
| 2.2991 | 42190 | 5.6921 | - |
| 2.2996 | 42200 | 5.736 | - |
| 2.3001 | 42210 | 5.6126 | - |
| 2.3007 | 42220 | 5.572 | - |
| 2.3012 | 42230 | 5.4271 | - |
| 2.3018 | 42240 | 5.7532 | - |
| 2.3023 | 42250 | 5.712 | - |
| 2.3029 | 42260 | 5.5955 | - |
| 2.3034 | 42270 | 5.5423 | - |
| 2.3040 | 42280 | 5.6957 | - |
| 2.3045 | 42290 | 5.6509 | - |
| 2.3051 | 42300 | 5.6888 | - |
| 2.3056 | 42310 | 5.6009 | - |
| 2.3061 | 42320 | 5.4549 | - |
| 2.3067 | 42330 | 5.6969 | - |
| 2.3072 | 42340 | 5.5893 | - |
| 2.3078 | 42350 | 5.6683 | - |
| 2.3083 | 42360 | 5.5883 | - |
| 2.3089 | 42370 | 5.6558 | - |
| 2.3094 | 42380 | 5.6439 | - |
| 2.3100 | 42390 | 5.5007 | - |
| 2.3105 | 42400 | 5.6713 | - |
| 2.3110 | 42410 | 5.5297 | - |
| 2.3116 | 42420 | 5.755 | - |
| 2.3121 | 42430 | 5.6727 | - |
| 2.3127 | 42440 | 5.6703 | - |
| 2.3132 | 42450 | 5.5932 | - |
| 2.3138 | 42460 | 5.3272 | - |
| 2.3143 | 42470 | 5.3739 | - |
| 2.3149 | 42480 | 5.4624 | - |
| 2.3154 | 42490 | 5.8248 | - |
| 2.3160 | 42500 | 5.6275 | - |
| 2.3165 | 42510 | 5.7717 | - |
| 2.3170 | 42520 | 5.6283 | - |
| 2.3176 | 42530 | 5.5101 | - |
| 2.3181 | 42540 | 5.5411 | - |
| 2.3187 | 42550 | 5.7347 | - |
| 2.3192 | 42560 | 5.7873 | - |
| 2.3198 | 42570 | 5.6621 | - |
| 2.3203 | 42580 | 5.7441 | - |
| 2.3209 | 42590 | 5.6855 | - |
| 2.3214 | 42600 | 5.5955 | - |
| 2.3219 | 42610 | 5.6663 | - |
| 2.3225 | 42620 | 5.5525 | - |
| 2.3230 | 42630 | 5.64 | - |
| 2.3236 | 42640 | 5.6122 | - |
| 2.3241 | 42650 | 5.7979 | - |
| 2.3247 | 42660 | 5.4156 | - |
| 2.3252 | 42670 | 5.614 | - |
| 2.3258 | 42680 | 5.6416 | - |
| 2.3263 | 42690 | 5.8205 | - |
| 2.3268 | 42700 | 5.6283 | - |
| 2.3274 | 42710 | 5.6047 | - |
| 2.3279 | 42720 | 5.5952 | - |
| 2.3285 | 42730 | 5.6641 | - |
| 2.3290 | 42740 | 5.6974 | - |
| 2.3296 | 42750 | 5.8513 | - |
| 2.3301 | 42760 | 5.3862 | - |
| 2.3307 | 42770 | 5.6832 | - |
| 2.3312 | 42780 | 5.5902 | - |
| 2.3318 | 42790 | 5.7685 | - |
| 2.3323 | 42800 | 5.7089 | - |
| 2.3328 | 42810 | 5.6593 | - |
| 2.3334 | 42820 | 5.5891 | - |
| 2.3339 | 42830 | 5.9729 | - |
| 2.3345 | 42840 | 5.6837 | - |
| 2.3350 | 42850 | 5.4101 | - |
| 2.3356 | 42860 | 5.6943 | - |
| 2.3361 | 42870 | 5.6204 | - |
| 2.3367 | 42880 | 5.7387 | - |
| 2.3372 | 42890 | 5.677 | - |
| 2.3377 | 42900 | 5.7026 | - |
| 2.3383 | 42910 | 5.9261 | - |
| 2.3388 | 42920 | 5.7383 | - |
| 2.3394 | 42930 | 5.6161 | - |
| 2.3399 | 42940 | 5.4537 | - |
| 2.3405 | 42950 | 5.4714 | - |
| 2.3410 | 42960 | 5.7107 | - |
| 2.3416 | 42970 | 5.5432 | - |
| 2.3421 | 42980 | 5.4782 | - |
| 2.3427 | 42990 | 5.5604 | - |
| 2.3432 | 43000 | 5.568 | 0.6877 |
| 2.3437 | 43010 | 5.5667 | - |
| 2.3443 | 43020 | 5.7016 | - |
| 2.3448 | 43030 | 5.6895 | - |
| 2.3454 | 43040 | 5.5405 | - |
| 2.3459 | 43050 | 5.7829 | - |
| 2.3465 | 43060 | 5.5215 | - |
| 2.3470 | 43070 | 5.54 | - |
| 2.3476 | 43080 | 5.5845 | - |
| 2.3481 | 43090 | 5.6009 | - |
| 2.3486 | 43100 | 5.6288 | - |
| 2.3492 | 43110 | 5.4734 | - |
| 2.3497 | 43120 | 5.7664 | - |
| 2.3503 | 43130 | 5.6491 | - |
| 2.3508 | 43140 | 5.5911 | - |
| 2.3514 | 43150 | 5.6866 | - |
| 2.3519 | 43160 | 5.6715 | - |
| 2.3525 | 43170 | 5.6194 | - |
| 2.3530 | 43180 | 5.799 | - |
| 2.3536 | 43190 | 5.6177 | - |
| 2.3541 | 43200 | 5.6639 | - |
| 2.3546 | 43210 | 5.8817 | - |
| 2.3552 | 43220 | 5.4939 | - |
| 2.3557 | 43230 | 5.4568 | - |
| 2.3563 | 43240 | 5.4516 | - |
| 2.3568 | 43250 | 5.7113 | - |
| 2.3574 | 43260 | 5.7576 | - |
| 2.3579 | 43270 | 5.3581 | - |
| 2.3585 | 43280 | 5.6111 | - |
| 2.3590 | 43290 | 6.0416 | - |
| 2.3595 | 43300 | 5.7466 | - |
| 2.3601 | 43310 | 5.6168 | - |
| 2.3606 | 43320 | 5.5897 | - |
| 2.3612 | 43330 | 5.601 | - |
| 2.3617 | 43340 | 5.7339 | - |
| 2.3623 | 43350 | 5.46 | - |
| 2.3628 | 43360 | 5.684 | - |
| 2.3634 | 43370 | 5.723 | - |
| 2.3639 | 43380 | 5.4163 | - |
| 2.3644 | 43390 | 5.5272 | - |
| 2.3650 | 43400 | 5.7903 | - |
| 2.3655 | 43410 | 5.5416 | - |
| 2.3661 | 43420 | 5.6588 | - |
| 2.3666 | 43430 | 5.4845 | - |
| 2.3672 | 43440 | 5.3677 | - |
| 2.3677 | 43450 | 5.6459 | - |
| 2.3683 | 43460 | 5.6382 | - |
| 2.3688 | 43470 | 5.5929 | - |
| 2.3694 | 43480 | 5.5236 | - |
| 2.3699 | 43490 | 5.6309 | - |
| 2.3704 | 43500 | 5.5384 | - |
| 2.3710 | 43510 | 5.472 | - |
| 2.3715 | 43520 | 5.5163 | - |
| 2.3721 | 43530 | 5.7005 | - |
| 2.3726 | 43540 | 5.8595 | - |
| 2.3732 | 43550 | 5.6177 | - |
| 2.3737 | 43560 | 5.6774 | - |
| 2.3743 | 43570 | 5.5655 | - |
| 2.3748 | 43580 | 5.8677 | - |
| 2.3753 | 43590 | 5.6372 | - |
| 2.3759 | 43600 | 5.4423 | - |
| 2.3764 | 43610 | 5.5105 | - |
| 2.3770 | 43620 | 5.6519 | - |
| 2.3775 | 43630 | 5.5341 | - |
| 2.3781 | 43640 | 5.5975 | - |
| 2.3786 | 43650 | 5.539 | - |
| 2.3792 | 43660 | 5.5751 | - |
| 2.3797 | 43670 | 5.5895 | - |
| 2.3803 | 43680 | 5.5958 | - |
| 2.3808 | 43690 | 5.8205 | - |
| 2.3813 | 43700 | 5.6072 | - |
| 2.3819 | 43710 | 5.8326 | - |
| 2.3824 | 43720 | 5.7305 | - |
| 2.3830 | 43730 | 5.6562 | - |
| 2.3835 | 43740 | 5.7647 | - |
| 2.3841 | 43750 | 5.5064 | - |
| 2.3846 | 43760 | 5.5406 | - |
| 2.3852 | 43770 | 5.7086 | - |
| 2.3857 | 43780 | 5.5401 | - |
| 2.3862 | 43790 | 5.4847 | - |
| 2.3868 | 43800 | 5.6436 | - |
| 2.3873 | 43810 | 5.7472 | - |
| 2.3879 | 43820 | 5.5023 | - |
| 2.3884 | 43830 | 5.3824 | - |
| 2.3890 | 43840 | 5.6073 | - |
| 2.3895 | 43850 | 5.5363 | - |
| 2.3901 | 43860 | 5.864 | - |
| 2.3906 | 43870 | 5.5786 | - |
| 2.3912 | 43880 | 5.6143 | - |
| 2.3917 | 43890 | 5.5041 | - |
| 2.3922 | 43900 | 5.5597 | - |
| 2.3928 | 43910 | 5.6941 | - |
| 2.3933 | 43920 | 5.4072 | - |
| 2.3939 | 43930 | 5.5803 | - |
| 2.3944 | 43940 | 5.5385 | - |
| 2.3950 | 43950 | 5.7675 | - |
| 2.3955 | 43960 | 5.5093 | - |
| 2.3961 | 43970 | 5.6003 | - |
| 2.3966 | 43980 | 5.2459 | - |
| 2.3971 | 43990 | 5.717 | - |
| 2.3977 | 44000 | 5.8334 | 0.6880 |
| 2.3982 | 44010 | 5.6249 | - |
| 2.3988 | 44020 | 5.5005 | - |
| 2.3993 | 44030 | 5.5341 | - |
| 2.3999 | 44040 | 5.5919 | - |
| 2.4004 | 44050 | 5.7939 | - |
| 2.4010 | 44060 | 5.7394 | - |
| 2.4015 | 44070 | 5.5959 | - |
| 2.4020 | 44080 | 5.516 | - |
| 2.4026 | 44090 | 5.51 | - |
| 2.4031 | 44100 | 5.7165 | - |
| 2.4037 | 44110 | 5.7261 | - |
| 2.4042 | 44120 | 5.5268 | - |
| 2.4048 | 44130 | 5.752 | - |
| 2.4053 | 44140 | 5.6608 | - |
| 2.4059 | 44150 | 5.5729 | - |
| 2.4064 | 44160 | 5.6742 | - |
| 2.4070 | 44170 | 5.7103 | - |
| 2.4075 | 44180 | 5.566 | - |
| 2.4080 | 44190 | 5.5075 | - |
| 2.4086 | 44200 | 5.6042 | - |
| 2.4091 | 44210 | 5.6007 | - |
| 2.4097 | 44220 | 5.6953 | - |
| 2.4102 | 44230 | 5.6037 | - |
| 2.4108 | 44240 | 5.4615 | - |
| 2.4113 | 44250 | 5.3852 | - |
| 2.4119 | 44260 | 5.7092 | - |
| 2.4124 | 44270 | 5.8602 | - |
| 2.4129 | 44280 | 5.6753 | - |
| 2.4135 | 44290 | 5.7241 | - |
| 2.4140 | 44300 | 5.7057 | - |
| 2.4146 | 44310 | 5.4268 | - |
| 2.4151 | 44320 | 5.6403 | - |
| 2.4157 | 44330 | 5.5764 | - |
| 2.4162 | 44340 | 5.6412 | - |
| 2.4168 | 44350 | 5.511 | - |
| 2.4173 | 44360 | 5.8684 | - |
| 2.4179 | 44370 | 5.6556 | - |
| 2.4184 | 44380 | 5.5412 | - |
| 2.4189 | 44390 | 5.6368 | - |
| 2.4195 | 44400 | 5.7274 | - |
| 2.4200 | 44410 | 5.7313 | - |
| 2.4206 | 44420 | 5.6583 | - |
| 2.4211 | 44430 | 5.8873 | - |
| 2.4217 | 44440 | 5.3933 | - |
| 2.4222 | 44450 | 5.6382 | - |
| 2.4228 | 44460 | 5.6594 | - |
| 2.4233 | 44470 | 5.7842 | - |
| 2.4238 | 44480 | 5.5793 | - |
| 2.4244 | 44490 | 5.7134 | - |
| 2.4249 | 44500 | 5.9235 | - |
| 2.4255 | 44510 | 5.7796 | - |
| 2.4260 | 44520 | 5.5645 | - |
| 2.4266 | 44530 | 5.5321 | - |
| 2.4271 | 44540 | 5.5437 | - |
| 2.4277 | 44550 | 5.6134 | - |
| 2.4282 | 44560 | 5.5769 | - |
| 2.4288 | 44570 | 5.7436 | - |
| 2.4293 | 44580 | 5.6834 | - |
| 2.4298 | 44590 | 5.6178 | - |
| 2.4304 | 44600 | 5.6233 | - |
| 2.4309 | 44610 | 5.5168 | - |
| 2.4315 | 44620 | 5.5355 | - |
| 2.4320 | 44630 | 5.5222 | - |
| 2.4326 | 44640 | 5.6612 | - |
| 2.4331 | 44650 | 5.684 | - |
| 2.4337 | 44660 | 5.4755 | - |
| 2.4342 | 44670 | 5.7759 | - |
| 2.4347 | 44680 | 5.4854 | - |
| 2.4353 | 44690 | 5.3062 | - |
| 2.4358 | 44700 | 5.7561 | - |
| 2.4364 | 44710 | 5.6959 | - |
| 2.4369 | 44720 | 5.5721 | - |
| 2.4375 | 44730 | 5.5745 | - |
| 2.4380 | 44740 | 5.637 | - |
| 2.4386 | 44750 | 5.6719 | - |
| 2.4391 | 44760 | 5.6581 | - |
| 2.4396 | 44770 | 5.6352 | - |
| 2.4402 | 44780 | 5.7957 | - |
| 2.4407 | 44790 | 5.8771 | - |
| 2.4413 | 44800 | 5.2402 | - |
| 2.4418 | 44810 | 5.6978 | - |
| 2.4424 | 44820 | 5.6714 | - |
| 2.4429 | 44830 | 5.426 | - |
| 2.4435 | 44840 | 5.4173 | - |
| 2.4440 | 44850 | 5.636 | - |
| 2.4446 | 44860 | 5.4737 | - |
| 2.4451 | 44870 | 5.7138 | - |
| 2.4456 | 44880 | 5.4977 | - |
| 2.4462 | 44890 | 5.6294 | - |
| 2.4467 | 44900 | 5.8386 | - |
| 2.4473 | 44910 | 5.5915 | - |
| 2.4478 | 44920 | 5.6885 | - |
| 2.4484 | 44930 | 5.5086 | - |
| 2.4489 | 44940 | 5.666 | - |
| 2.4495 | 44950 | 5.3737 | - |
| 2.4500 | 44960 | 5.5158 | - |
| 2.4505 | 44970 | 5.4623 | - |
| 2.4511 | 44980 | 5.7329 | - |
| 2.4516 | 44990 | 5.4425 | - |
| 2.4522 | 45000 | 5.6319 | 0.6834 |
| 2.4527 | 45010 | 5.6371 | - |
| 2.4533 | 45020 | 5.5228 | - |
| 2.4538 | 45030 | 5.8272 | - |
| 2.4544 | 45040 | 5.5642 | - |
| 2.4549 | 45050 | 5.4416 | - |
| 2.4555 | 45060 | 5.396 | - |
| 2.4560 | 45070 | 5.4522 | - |
| 2.4565 | 45080 | 5.5455 | - |
| 2.4571 | 45090 | 5.5129 | - |
| 2.4576 | 45100 | 5.5823 | - |
| 2.4582 | 45110 | 5.7072 | - |
| 2.4587 | 45120 | 5.6942 | - |
| 2.4593 | 45130 | 5.5889 | - |
| 2.4598 | 45140 | 5.4825 | - |
| 2.4604 | 45150 | 5.6158 | - |
| 2.4609 | 45160 | 5.5905 | - |
| 2.4614 | 45170 | 5.6247 | - |
| 2.4620 | 45180 | 5.7498 | - |
| 2.4625 | 45190 | 5.6989 | - |
| 2.4631 | 45200 | 5.6334 | - |
| 2.4636 | 45210 | 5.7917 | - |
| 2.4642 | 45220 | 5.6091 | - |
| 2.4647 | 45230 | 5.4901 | - |
| 2.4653 | 45240 | 5.8652 | - |
| 2.4658 | 45250 | 5.5782 | - |
| 2.4664 | 45260 | 5.8102 | - |
| 2.4669 | 45270 | 5.548 | - |
| 2.4674 | 45280 | 5.751 | - |
| 2.4680 | 45290 | 5.5368 | - |
| 2.4685 | 45300 | 5.5754 | - |
| 2.4691 | 45310 | 5.6208 | - |
| 2.4696 | 45320 | 5.4257 | - |
| 2.4702 | 45330 | 5.5361 | - |
| 2.4707 | 45340 | 5.5155 | - |
| 2.4713 | 45350 | 5.3894 | - |
| 2.4718 | 45360 | 5.7436 | - |
| 2.4723 | 45370 | 5.559 | - |
| 2.4729 | 45380 | 5.5975 | - |
| 2.4734 | 45390 | 5.7832 | - |
| 2.4740 | 45400 | 5.853 | - |
| 2.4745 | 45410 | 5.5288 | - |
| 2.4751 | 45420 | 5.5407 | - |
| 2.4756 | 45430 | 6.0423 | - |
| 2.4762 | 45440 | 5.4879 | - |
| 2.4767 | 45450 | 5.4868 | - |
| 2.4772 | 45460 | 5.6434 | - |
| 2.4778 | 45470 | 5.7398 | - |
| 2.4783 | 45480 | 5.5451 | - |
| 2.4789 | 45490 | 5.5281 | - |
| 2.4794 | 45500 | 5.3099 | - |
| 2.4800 | 45510 | 5.686 | - |
| 2.4805 | 45520 | 5.9396 | - |
| 2.4811 | 45530 | 5.7689 | - |
| 2.4816 | 45540 | 5.5568 | - |
| 2.4822 | 45550 | 5.5218 | - |
| 2.4827 | 45560 | 5.6319 | - |
| 2.4832 | 45570 | 5.4106 | - |
| 2.4838 | 45580 | 5.6486 | - |
| 2.4843 | 45590 | 5.5975 | - |
| 2.4849 | 45600 | 5.5493 | - |
| 2.4854 | 45610 | 5.586 | - |
| 2.4860 | 45620 | 5.6439 | - |
| 2.4865 | 45630 | 5.5417 | - |
| 2.4871 | 45640 | 5.4202 | - |
| 2.4876 | 45650 | 5.8667 | - |
| 2.4881 | 45660 | 5.7643 | - |
| 2.4887 | 45670 | 5.5105 | - |
| 2.4892 | 45680 | 5.586 | - |
| 2.4898 | 45690 | 5.7374 | - |
| 2.4903 | 45700 | 5.5061 | - |
| 2.4909 | 45710 | 5.7458 | - |
| 2.4914 | 45720 | 5.519 | - |
| 2.4920 | 45730 | 5.6232 | - |
| 2.4925 | 45740 | 5.6588 | - |
| 2.4931 | 45750 | 5.5583 | - |
| 2.4936 | 45760 | 5.8431 | - |
| 2.4941 | 45770 | 5.4585 | - |
| 2.4947 | 45780 | 5.6566 | - |
| 2.4952 | 45790 | 5.644 | - |
| 2.4958 | 45800 | 5.6619 | - |
| 2.4963 | 45810 | 5.6046 | - |
| 2.4969 | 45820 | 5.693 | - |
| 2.4974 | 45830 | 5.5361 | - |
| 2.4980 | 45840 | 5.9302 | - |
| 2.4985 | 45850 | 5.6137 | - |
| 2.4990 | 45860 | 5.6762 | - |
| 2.4996 | 45870 | 5.5211 | - |
| 2.5001 | 45880 | 5.602 | - |
| 2.5007 | 45890 | 5.6086 | - |
| 2.5012 | 45900 | 5.6794 | - |
| 2.5018 | 45910 | 5.7094 | - |
| 2.5023 | 45920 | 5.5342 | - |
| 2.5029 | 45930 | 5.5241 | - |
| 2.5034 | 45940 | 5.4483 | - |
| 2.5040 | 45950 | 5.3071 | - |
| 2.5045 | 45960 | 5.5404 | - |
| 2.5050 | 45970 | 5.5007 | - |
| 2.5056 | 45980 | 5.5588 | - |
| 2.5061 | 45990 | 5.4425 | - |
| 2.5067 | 46000 | 5.4335 | 0.6858 |
| 2.5072 | 46010 | 5.7394 | - |
| 2.5078 | 46020 | 5.6954 | - |
| 2.5083 | 46030 | 5.4103 | - |
| 2.5089 | 46040 | 5.5405 | - |
| 2.5094 | 46050 | 5.4619 | - |
| 2.5099 | 46060 | 5.5731 | - |
| 2.5105 | 46070 | 5.6097 | - |
| 2.5110 | 46080 | 5.4368 | - |
| 2.5116 | 46090 | 5.6261 | - |
| 2.5121 | 46100 | 5.506 | - |
| 2.5127 | 46110 | 5.7491 | - |
| 2.5132 | 46120 | 5.4896 | - |
| 2.5138 | 46130 | 5.4954 | - |
| 2.5143 | 46140 | 5.615 | - |
| 2.5148 | 46150 | 5.6319 | - |
| 2.5154 | 46160 | 5.5557 | - |
| 2.5159 | 46170 | 5.8347 | - |
| 2.5165 | 46180 | 5.6106 | - |
| 2.5170 | 46190 | 5.7053 | - |
| 2.5176 | 46200 | 5.4157 | - |
| 2.5181 | 46210 | 5.4968 | - |
| 2.5187 | 46220 | 5.3547 | - |
| 2.5192 | 46230 | 5.5624 | - |
| 2.5198 | 46240 | 5.6473 | - |
| 2.5203 | 46250 | 5.6908 | - |
| 2.5208 | 46260 | 5.678 | - |
| 2.5214 | 46270 | 5.5847 | - |
| 2.5219 | 46280 | 5.6866 | - |
| 2.5225 | 46290 | 5.7646 | - |
| 2.5230 | 46300 | 5.4015 | - |
| 2.5236 | 46310 | 5.4423 | - |
| 2.5241 | 46320 | 5.8332 | - |
| 2.5247 | 46330 | 5.8193 | - |
| 2.5252 | 46340 | 5.7163 | - |
| 2.5257 | 46350 | 5.4822 | - |
| 2.5263 | 46360 | 5.6712 | - |
| 2.5268 | 46370 | 5.5764 | - |
| 2.5274 | 46380 | 5.5039 | - |
| 2.5279 | 46390 | 5.385 | - |
| 2.5285 | 46400 | 5.2359 | - |
| 2.5290 | 46410 | 5.6624 | - |
| 2.5296 | 46420 | 5.4424 | - |
| 2.5301 | 46430 | 5.5992 | - |
| 2.5307 | 46440 | 5.5011 | - |
| 2.5312 | 46450 | 5.4497 | - |
| 2.5317 | 46460 | 5.7701 | - |
| 2.5323 | 46470 | 5.6085 | - |
| 2.5328 | 46480 | 5.7529 | - |
| 2.5334 | 46490 | 5.6134 | - |
| 2.5339 | 46500 | 5.6631 | - |
| 2.5345 | 46510 | 5.5167 | - |
| 2.5350 | 46520 | 5.5147 | - |
| 2.5356 | 46530 | 5.4979 | - |
| 2.5361 | 46540 | 5.7691 | - |
| 2.5366 | 46550 | 5.3911 | - |
| 2.5372 | 46560 | 5.7741 | - |
| 2.5377 | 46570 | 5.4222 | - |
| 2.5383 | 46580 | 5.3019 | - |
| 2.5388 | 46590 | 5.5488 | - |
| 2.5394 | 46600 | 5.604 | - |
| 2.5399 | 46610 | 5.6764 | - |
| 2.5405 | 46620 | 5.5005 | - |
| 2.5410 | 46630 | 5.3882 | - |
| 2.5416 | 46640 | 5.8511 | - |
| 2.5421 | 46650 | 5.6307 | - |
| 2.5426 | 46660 | 5.6558 | - |
| 2.5432 | 46670 | 5.5031 | - |
| 2.5437 | 46680 | 5.5702 | - |
| 2.5443 | 46690 | 5.642 | - |
| 2.5448 | 46700 | 5.5332 | - |
| 2.5454 | 46710 | 5.7006 | - |
| 2.5459 | 46720 | 5.5444 | - |
| 2.5465 | 46730 | 5.6363 | - |
| 2.5470 | 46740 | 5.5709 | - |
| 2.5475 | 46750 | 5.5786 | - |
| 2.5481 | 46760 | 5.6433 | - |
| 2.5486 | 46770 | 5.6199 | - |
| 2.5492 | 46780 | 5.5491 | - |
| 2.5497 | 46790 | 5.4449 | - |
| 2.5503 | 46800 | 5.5341 | - |
| 2.5508 | 46810 | 5.6515 | - |
| 2.5514 | 46820 | 5.5755 | - |
| 2.5519 | 46830 | 5.6213 | - |
| 2.5524 | 46840 | 5.557 | - |
| 2.5530 | 46850 | 5.6938 | - |
| 2.5535 | 46860 | 5.5876 | - |
| 2.5541 | 46870 | 5.6846 | - |
| 2.5546 | 46880 | 5.5715 | - |
| 2.5552 | 46890 | 5.5299 | - |
| 2.5557 | 46900 | 5.5799 | - |
| 2.5563 | 46910 | 5.489 | - |
| 2.5568 | 46920 | 5.6815 | - |
| 2.5574 | 46930 | 5.8566 | - |
| 2.5579 | 46940 | 5.48 | - |
| 2.5584 | 46950 | 5.4598 | - |
| 2.5590 | 46960 | 5.4766 | - |
| 2.5595 | 46970 | 5.7147 | - |
| 2.5601 | 46980 | 5.5197 | - |
| 2.5606 | 46990 | 5.402 | - |
| 2.5612 | 47000 | 5.4325 | 0.6861 |
| 2.5617 | 47010 | 5.6366 | - |
| 2.5623 | 47020 | 5.5899 | - |
| 2.5628 | 47030 | 5.4621 | - |
| 2.5633 | 47040 | 5.5114 | - |
| 2.5639 | 47050 | 5.6747 | - |
| 2.5644 | 47060 | 5.7137 | - |
| 2.5650 | 47070 | 5.3358 | - |
| 2.5655 | 47080 | 5.5948 | - |
| 2.5661 | 47090 | 5.5904 | - |
| 2.5666 | 47100 | 5.7217 | - |
| 2.5672 | 47110 | 5.851 | - |
| 2.5677 | 47120 | 5.5338 | - |
| 2.5683 | 47130 | 5.7824 | - |
| 2.5688 | 47140 | 5.5606 | - |
| 2.5693 | 47150 | 5.7327 | - |
| 2.5699 | 47160 | 5.7607 | - |
| 2.5704 | 47170 | 5.6961 | - |
| 2.5710 | 47180 | 5.6654 | - |
| 2.5715 | 47190 | 5.6784 | - |
| 2.5721 | 47200 | 5.6056 | - |
| 2.5726 | 47210 | 5.6939 | - |
| 2.5732 | 47220 | 5.7523 | - |
| 2.5737 | 47230 | 5.6224 | - |
| 2.5742 | 47240 | 5.6308 | - |
| 2.5748 | 47250 | 5.6243 | - |
| 2.5753 | 47260 | 5.4591 | - |
| 2.5759 | 47270 | 5.3276 | - |
| 2.5764 | 47280 | 5.6076 | - |
| 2.5770 | 47290 | 5.6494 | - |
| 2.5775 | 47300 | 5.6398 | - |
| 2.5781 | 47310 | 5.5294 | - |
| 2.5786 | 47320 | 5.6017 | - |
| 2.5792 | 47330 | 5.6524 | - |
| 2.5797 | 47340 | 5.7381 | - |
| 2.5802 | 47350 | 5.6114 | - |
| 2.5808 | 47360 | 5.5766 | - |
| 2.5813 | 47370 | 5.5481 | - |
| 2.5819 | 47380 | 5.7142 | - |
| 2.5824 | 47390 | 5.486 | - |
| 2.5830 | 47400 | 5.6324 | - |
| 2.5835 | 47410 | 5.4779 | - |
| 2.5841 | 47420 | 5.2947 | - |
| 2.5846 | 47430 | 5.7644 | - |
| 2.5851 | 47440 | 5.7075 | - |
| 2.5857 | 47450 | 5.6067 | - |
| 2.5862 | 47460 | 5.4632 | - |
| 2.5868 | 47470 | 5.5574 | - |
| 2.5873 | 47480 | 5.5809 | - |
| 2.5879 | 47490 | 5.542 | - |
| 2.5884 | 47500 | 5.7822 | - |
| 2.5890 | 47510 | 5.6565 | - |
| 2.5895 | 47520 | 5.624 | - |
| 2.5900 | 47530 | 5.8231 | - |
| 2.5906 | 47540 | 5.6945 | - |
| 2.5911 | 47550 | 5.6096 | - |
| 2.5917 | 47560 | 5.5702 | - |
| 2.5922 | 47570 | 5.6106 | - |
| 2.5928 | 47580 | 5.6017 | - |
| 2.5933 | 47590 | 5.595 | - |
| 2.5939 | 47600 | 5.5047 | - |
| 2.5944 | 47610 | 5.6055 | - |
| 2.5950 | 47620 | 5.4529 | - |
| 2.5955 | 47630 | 5.7898 | - |
| 2.5960 | 47640 | 5.7191 | - |
| 2.5966 | 47650 | 5.6639 | - |
| 2.5971 | 47660 | 5.6016 | - |
| 2.5977 | 47670 | 5.7397 | - |
| 2.5982 | 47680 | 5.7981 | - |
| 2.5988 | 47690 | 5.4811 | - |
| 2.5993 | 47700 | 5.6526 | - |
| 2.5999 | 47710 | 5.7343 | - |
| 2.6004 | 47720 | 5.6324 | - |
| 2.6009 | 47730 | 5.3765 | - |
| 2.6015 | 47740 | 5.4636 | - |
| 2.6020 | 47750 | 5.4594 | - |
| 2.6026 | 47760 | 5.6261 | - |
| 2.6031 | 47770 | 5.8096 | - |
| 2.6037 | 47780 | 5.7972 | - |
| 2.6042 | 47790 | 5.4864 | - |
| 2.6048 | 47800 | 5.8957 | - |
| 2.6053 | 47810 | 5.5761 | - |
| 2.6059 | 47820 | 5.4159 | - |
| 2.6064 | 47830 | 5.6258 | - |
| 2.6069 | 47840 | 5.818 | - |
| 2.6075 | 47850 | 5.7123 | - |
| 2.6080 | 47860 | 5.5842 | - |
| 2.6086 | 47870 | 5.6495 | - |
| 2.6091 | 47880 | 5.4455 | - |
| 2.6097 | 47890 | 5.8055 | - |
| 2.6102 | 47900 | 5.4652 | - |
| 2.6108 | 47910 | 5.6072 | - |
| 2.6113 | 47920 | 5.6299 | - |
| 2.6118 | 47930 | 5.5595 | - |
| 2.6124 | 47940 | 5.6684 | - |
| 2.6129 | 47950 | 5.7884 | - |
| 2.6135 | 47960 | 5.69 | - |
| 2.6140 | 47970 | 5.389 | - |
| 2.6146 | 47980 | 5.7626 | - |
| 2.6151 | 47990 | 5.6229 | - |
| 2.6157 | 48000 | 5.6437 | 0.6828 |
| 2.6162 | 48010 | 5.5516 | - |
| 2.6168 | 48020 | 5.6274 | - |
| 2.6173 | 48030 | 5.3571 | - |
| 2.6178 | 48040 | 5.4431 | - |
| 2.6184 | 48050 | 5.9077 | - |
| 2.6189 | 48060 | 5.6402 | - |
| 2.6195 | 48070 | 5.5494 | - |
| 2.6200 | 48080 | 5.5864 | - |
| 2.6206 | 48090 | 5.585 | - |
| 2.6211 | 48100 | 5.6354 | - |
| 2.6217 | 48110 | 5.7883 | - |
| 2.6222 | 48120 | 5.6877 | - |
| 2.6227 | 48130 | 5.499 | - |
| 2.6233 | 48140 | 5.6448 | - |
| 2.6238 | 48150 | 5.605 | - |
| 2.6244 | 48160 | 5.6265 | - |
| 2.6249 | 48170 | 5.603 | - |
| 2.6255 | 48180 | 5.586 | - |
| 2.6260 | 48190 | 5.4352 | - |
| 2.6266 | 48200 | 5.4799 | - |
| 2.6271 | 48210 | 5.6371 | - |
| 2.6276 | 48220 | 5.4709 | - |
| 2.6282 | 48230 | 5.6958 | - |
| 2.6287 | 48240 | 5.5464 | - |
| 2.6293 | 48250 | 5.802 | - |
| 2.6298 | 48260 | 5.6802 | - |
| 2.6304 | 48270 | 5.5566 | - |
| 2.6309 | 48280 | 5.6315 | - |
| 2.6315 | 48290 | 5.693 | - |
| 2.6320 | 48300 | 5.3432 | - |
| 2.6326 | 48310 | 5.5882 | - |
| 2.6331 | 48320 | 5.5233 | - |
| 2.6336 | 48330 | 5.3405 | - |
| 2.6342 | 48340 | 5.6017 | - |
| 2.6347 | 48350 | 5.643 | - |
| 2.6353 | 48360 | 5.7573 | - |
| 2.6358 | 48370 | 5.3449 | - |
| 2.6364 | 48380 | 5.421 | - |
| 2.6369 | 48390 | 5.9119 | - |
| 2.6375 | 48400 | 5.6119 | - |
| 2.6380 | 48410 | 5.6363 | - |
| 2.6385 | 48420 | 5.2883 | - |
| 2.6391 | 48430 | 5.5707 | - |
| 2.6396 | 48440 | 5.7741 | - |
| 2.6402 | 48450 | 5.5985 | - |
| 2.6407 | 48460 | 5.6246 | - |
| 2.6413 | 48470 | 5.7094 | - |
| 2.6418 | 48480 | 5.5406 | - |
| 2.6424 | 48490 | 5.4156 | - |
| 2.6429 | 48500 | 5.404 | - |
| 2.6435 | 48510 | 5.6469 | - |
| 2.6440 | 48520 | 5.4934 | - |
| 2.6445 | 48530 | 5.2972 | - |
| 2.6451 | 48540 | 5.5997 | - |
| 2.6456 | 48550 | 5.5843 | - |
| 2.6462 | 48560 | 5.7126 | - |
| 2.6467 | 48570 | 5.6747 | - |
| 2.6473 | 48580 | 5.5568 | - |
| 2.6478 | 48590 | 5.5538 | - |
| 2.6484 | 48600 | 5.5191 | - |
| 2.6489 | 48610 | 5.6302 | - |
| 2.6494 | 48620 | 5.7081 | - |
| 2.6500 | 48630 | 5.5797 | - |
| 2.6505 | 48640 | 5.537 | - |
| 2.6511 | 48650 | 5.6238 | - |
| 2.6516 | 48660 | 5.5551 | - |
| 2.6522 | 48670 | 5.6348 | - |
| 2.6527 | 48680 | 5.3456 | - |
| 2.6533 | 48690 | 5.5937 | - |
| 2.6538 | 48700 | 5.4829 | - |
| 2.6544 | 48710 | 5.4967 | - |
| 2.6549 | 48720 | 5.5977 | - |
| 2.6554 | 48730 | 5.7999 | - |
| 2.6560 | 48740 | 5.6644 | - |
| 2.6565 | 48750 | 5.5891 | - |
| 2.6571 | 48760 | 5.6271 | - |
| 2.6576 | 48770 | 5.5906 | - |
| 2.6582 | 48780 | 5.5386 | - |
| 2.6587 | 48790 | 5.5565 | - |
| 2.6593 | 48800 | 5.6664 | - |
| 2.6598 | 48810 | 5.787 | - |
| 2.6603 | 48820 | 5.552 | - |
| 2.6609 | 48830 | 5.3583 | - |
| 2.6614 | 48840 | 5.692 | - |
| 2.6620 | 48850 | 5.4245 | - |
| 2.6625 | 48860 | 5.4309 | - |
| 2.6631 | 48870 | 5.7604 | - |
| 2.6636 | 48880 | 5.5048 | - |
| 2.6642 | 48890 | 5.5949 | - |
| 2.6647 | 48900 | 5.485 | - |
| 2.6652 | 48910 | 5.6394 | - |
| 2.6658 | 48920 | 5.5587 | - |
| 2.6663 | 48930 | 5.7614 | - |
| 2.6669 | 48940 | 5.7401 | - |
| 2.6674 | 48950 | 5.6884 | - |
| 2.6680 | 48960 | 5.6878 | - |
| 2.6685 | 48970 | 5.7521 | - |
| 2.6691 | 48980 | 5.437 | - |
| 2.6696 | 48990 | 5.5044 | - |
| 2.6702 | 49000 | 5.4885 | 0.6808 |
| 2.6707 | 49010 | 5.6974 | - |
| 2.6712 | 49020 | 5.7507 | - |
| 2.6718 | 49030 | 5.5766 | - |
| 2.6723 | 49040 | 5.6451 | - |
| 2.6729 | 49050 | 5.5092 | - |
| 2.6734 | 49060 | 5.6782 | - |
| 2.6740 | 49070 | 5.4883 | - |
| 2.6745 | 49080 | 5.7252 | - |
| 2.6751 | 49090 | 5.4358 | - |
| 2.6756 | 49100 | 5.4373 | - |
| 2.6761 | 49110 | 5.6446 | - |
| 2.6767 | 49120 | 5.56 | - |
| 2.6772 | 49130 | 5.5149 | - |
| 2.6778 | 49140 | 5.7119 | - |
| 2.6783 | 49150 | 5.6186 | - |
| 2.6789 | 49160 | 5.56 | - |
| 2.6794 | 49170 | 5.5946 | - |
| 2.6800 | 49180 | 5.8671 | - |
| 2.6805 | 49190 | 5.5451 | - |
| 2.6811 | 49200 | 5.4079 | - |
| 2.6816 | 49210 | 5.4411 | - |
| 2.6821 | 49220 | 5.5485 | - |
| 2.6827 | 49230 | 5.6005 | - |
| 2.6832 | 49240 | 5.8064 | - |
| 2.6838 | 49250 | 5.7566 | - |
| 2.6843 | 49260 | 5.6975 | - |
| 2.6849 | 49270 | 5.7814 | - |
| 2.6854 | 49280 | 5.5241 | - |
| 2.6860 | 49290 | 5.678 | - |
| 2.6865 | 49300 | 5.5822 | - |
| 2.6870 | 49310 | 5.5472 | - |
| 2.6876 | 49320 | 5.5624 | - |
| 2.6881 | 49330 | 5.4502 | - |
| 2.6887 | 49340 | 5.5028 | - |
| 2.6892 | 49350 | 5.5364 | - |
| 2.6898 | 49360 | 5.6063 | - |
| 2.6903 | 49370 | 5.9041 | - |
| 2.6909 | 49380 | 5.6373 | - |
| 2.6914 | 49390 | 5.5113 | - |
| 2.6920 | 49400 | 5.6212 | - |
| 2.6925 | 49410 | 5.576 | - |
| 2.6930 | 49420 | 5.6138 | - |
| 2.6936 | 49430 | 5.5737 | - |
| 2.6941 | 49440 | 5.75 | - |
| 2.6947 | 49450 | 5.4342 | - |
| 2.6952 | 49460 | 5.4097 | - |
| 2.6958 | 49470 | 5.8608 | - |
| 2.6963 | 49480 | 5.6249 | - |
| 2.6969 | 49490 | 5.5832 | - |
| 2.6974 | 49500 | 5.6179 | - |
| 2.6979 | 49510 | 5.8369 | - |
| 2.6985 | 49520 | 5.6402 | - |
| 2.6990 | 49530 | 5.6279 | - |
| 2.6996 | 49540 | 5.4436 | - |
| 2.7001 | 49550 | 5.7206 | - |
| 2.7007 | 49560 | 5.8815 | - |
| 2.7012 | 49570 | 5.4575 | - |
| 2.7018 | 49580 | 5.4473 | - |
| 2.7023 | 49590 | 5.4831 | - |
| 2.7028 | 49600 | 5.4653 | - |
| 2.7034 | 49610 | 5.6247 | - |
| 2.7039 | 49620 | 5.4252 | - |
| 2.7045 | 49630 | 5.7461 | - |
| 2.7050 | 49640 | 5.7167 | - |
| 2.7056 | 49650 | 5.6503 | - |
| 2.7061 | 49660 | 5.5346 | - |
| 2.7067 | 49670 | 5.7171 | - |
| 2.7072 | 49680 | 5.6677 | - |
| 2.7078 | 49690 | 5.7121 | - |
| 2.7083 | 49700 | 5.5537 | - |
| 2.7088 | 49710 | 5.6729 | - |
| 2.7094 | 49720 | 5.5747 | - |
| 2.7099 | 49730 | 5.619 | - |
| 2.7105 | 49740 | 5.5109 | - |
| 2.7110 | 49750 | 5.6346 | - |
| 2.7116 | 49760 | 5.7094 | - |
| 2.7121 | 49770 | 5.7552 | - |
| 2.7127 | 49780 | 5.5437 | - |
| 2.7132 | 49790 | 5.5378 | - |
| 2.7137 | 49800 | 5.6495 | - |
| 2.7143 | 49810 | 5.4955 | - |
| 2.7148 | 49820 | 5.6546 | - |
| 2.7154 | 49830 | 5.5732 | - |
| 2.7159 | 49840 | 5.5384 | - |
| 2.7165 | 49850 | 5.7355 | - |
| 2.7170 | 49860 | 5.6288 | - |
| 2.7176 | 49870 | 5.8195 | - |
| 2.7181 | 49880 | 5.7708 | - |
| 2.7187 | 49890 | 5.9362 | - |
| 2.7192 | 49900 | 5.6585 | - |
| 2.7197 | 49910 | 5.5073 | - |
| 2.7203 | 49920 | 5.6055 | - |
| 2.7208 | 49930 | 5.6342 | - |
| 2.7214 | 49940 | 5.3611 | - |
| 2.7219 | 49950 | 5.4627 | - |
| 2.7225 | 49960 | 5.5731 | - |
| 2.7230 | 49970 | 5.6879 | - |
| 2.7236 | 49980 | 5.7255 | - |
| 2.7241 | 49990 | 5.5255 | - |
| 2.7246 | 50000 | 5.5063 | 0.6802 |
| 2.7252 | 50010 | 5.602 | - |
| 2.7257 | 50020 | 5.3199 | - |
| 2.7263 | 50030 | 5.6019 | - |
| 2.7268 | 50040 | 5.7375 | - |
| 2.7274 | 50050 | 5.3671 | - |
| 2.7279 | 50060 | 5.2785 | - |
| 2.7285 | 50070 | 5.6127 | - |
| 2.7290 | 50080 | 5.6478 | - |
| 2.7296 | 50090 | 5.4172 | - |
| 2.7301 | 50100 | 5.8388 | - |
| 2.7306 | 50110 | 5.4616 | - |
| 2.7312 | 50120 | 5.6028 | - |
| 2.7317 | 50130 | 5.5027 | - |
| 2.7323 | 50140 | 5.4038 | - |
| 2.7328 | 50150 | 5.6853 | - |
| 2.7334 | 50160 | 5.2775 | - |
| 2.7339 | 50170 | 5.5976 | - |
| 2.7345 | 50180 | 5.7703 | - |
| 2.7350 | 50190 | 5.7367 | - |
| 2.7355 | 50200 | 5.6275 | - |
| 2.7361 | 50210 | 5.9166 | - |
| 2.7366 | 50220 | 5.4866 | - |
| 2.7372 | 50230 | 5.7468 | - |
| 2.7377 | 50240 | 5.4161 | - |
| 2.7383 | 50250 | 5.4027 | - |
| 2.7388 | 50260 | 5.7063 | - |
| 2.7394 | 50270 | 5.5963 | - |
| 2.7399 | 50280 | 5.5992 | - |
| 2.7405 | 50290 | 5.4163 | - |
| 2.7410 | 50300 | 5.7677 | - |
| 2.7415 | 50310 | 5.597 | - |
| 2.7421 | 50320 | 5.4861 | - |
| 2.7426 | 50330 | 5.6785 | - |
| 2.7432 | 50340 | 5.6354 | - |
| 2.7437 | 50350 | 5.3735 | - |
| 2.7443 | 50360 | 5.5975 | - |
| 2.7448 | 50370 | 5.5671 | - |
| 2.7454 | 50380 | 5.5625 | - |
| 2.7459 | 50390 | 5.5503 | - |
| 2.7464 | 50400 | 5.5078 | - |
| 2.7470 | 50410 | 5.779 | - |
| 2.7475 | 50420 | 5.6493 | - |
| 2.7481 | 50430 | 5.6269 | - |
| 2.7486 | 50440 | 5.7183 | - |
| 2.7492 | 50450 | 5.7393 | - |
| 2.7497 | 50460 | 5.6086 | - |
| 2.7503 | 50470 | 5.7005 | - |
| 2.7508 | 50480 | 5.3796 | - |
| 2.7513 | 50490 | 5.4175 | - |
| 2.7519 | 50500 | 5.3638 | - |
| 2.7524 | 50510 | 5.7983 | - |
| 2.7530 | 50520 | 5.6381 | - |
| 2.7535 | 50530 | 5.4766 | - |
| 2.7541 | 50540 | 5.7818 | - |
| 2.7546 | 50550 | 5.7056 | - |
| 2.7552 | 50560 | 5.688 | - |
| 2.7557 | 50570 | 5.7118 | - |
| 2.7563 | 50580 | 5.5873 | - |
| 2.7568 | 50590 | 5.6521 | - |
| 2.7573 | 50600 | 5.4076 | - |
| 2.7579 | 50610 | 5.4919 | - |
| 2.7584 | 50620 | 5.5197 | - |
| 2.7590 | 50630 | 5.6284 | - |
| 2.7595 | 50640 | 5.4017 | - |
| 2.7601 | 50650 | 5.762 | - |
| 2.7606 | 50660 | 5.4361 | - |
| 2.7612 | 50670 | 5.6028 | - |
| 2.7617 | 50680 | 5.5351 | - |
| 2.7622 | 50690 | 5.4784 | - |
| 2.7628 | 50700 | 5.633 | - |
| 2.7633 | 50710 | 5.2735 | - |
| 2.7639 | 50720 | 5.6161 | - |
| 2.7644 | 50730 | 5.6352 | - |
| 2.7650 | 50740 | 5.4997 | - |
| 2.7655 | 50750 | 5.5415 | - |
| 2.7661 | 50760 | 5.6177 | - |
| 2.7666 | 50770 | 5.5221 | - |
| 2.7672 | 50780 | 5.422 | - |
| 2.7677 | 50790 | 5.6059 | - |
| 2.7682 | 50800 | 5.4092 | - |
| 2.7688 | 50810 | 5.7831 | - |
| 2.7693 | 50820 | 5.6584 | - |
| 2.7699 | 50830 | 5.7514 | - |
| 2.7704 | 50840 | 5.645 | - |
| 2.7710 | 50850 | 5.4211 | - |
| 2.7715 | 50860 | 5.5963 | - |
| 2.7721 | 50870 | 5.5181 | - |
| 2.7726 | 50880 | 5.6539 | - |
| 2.7731 | 50890 | 5.2796 | - |
| 2.7737 | 50900 | 5.4092 | - |
| 2.7742 | 50910 | 5.5826 | - |
| 2.7748 | 50920 | 5.5177 | - |
| 2.7753 | 50930 | 5.5088 | - |
| 2.7759 | 50940 | 5.6614 | - |
| 2.7764 | 50950 | 5.533 | - |
| 2.7770 | 50960 | 5.5715 | - |
| 2.7775 | 50970 | 5.2038 | - |
| 2.7781 | 50980 | 5.4661 | - |
| 2.7786 | 50990 | 5.6551 | - |
| 2.7791 | 51000 | 5.4961 | 0.6793 |
| 2.7797 | 51010 | 5.5961 | - |
| 2.7802 | 51020 | 5.6757 | - |
| 2.7808 | 51030 | 5.3005 | - |
| 2.7813 | 51040 | 5.6149 | - |
| 2.7819 | 51050 | 5.5852 | - |
| 2.7824 | 51060 | 5.5617 | - |
| 2.7830 | 51070 | 5.6794 | - |
| 2.7835 | 51080 | 5.6867 | - |
| 2.7840 | 51090 | 5.3938 | - |
| 2.7846 | 51100 | 5.5146 | - |
| 2.7851 | 51110 | 5.5367 | - |
| 2.7857 | 51120 | 5.7206 | - |
| 2.7862 | 51130 | 5.4754 | - |
| 2.7868 | 51140 | 5.5047 | - |
| 2.7873 | 51150 | 5.5182 | - |
| 2.7879 | 51160 | 5.5093 | - |
| 2.7884 | 51170 | 5.3904 | - |
| 2.7889 | 51180 | 5.6597 | - |
| 2.7895 | 51190 | 5.8068 | - |
| 2.7900 | 51200 | 5.6132 | - |
| 2.7906 | 51210 | 5.5008 | - |
| 2.7911 | 51220 | 5.8325 | - |
| 2.7917 | 51230 | 5.6931 | - |
| 2.7922 | 51240 | 5.5414 | - |
| 2.7928 | 51250 | 5.5938 | - |
| 2.7933 | 51260 | 5.47 | - |
| 2.7939 | 51270 | 5.5815 | - |
| 2.7944 | 51280 | 5.3133 | - |
| 2.7949 | 51290 | 5.5656 | - |
| 2.7955 | 51300 | 5.4651 | - |
| 2.7960 | 51310 | 5.2408 | - |
| 2.7966 | 51320 | 5.6426 | - |
| 2.7971 | 51330 | 5.3581 | - |
| 2.7977 | 51340 | 5.5892 | - |
| 2.7982 | 51350 | 5.6573 | - |
| 2.7988 | 51360 | 5.8982 | - |
| 2.7993 | 51370 | 5.7905 | - |
| 2.7998 | 51380 | 5.5058 | - |
| 2.8004 | 51390 | 5.5767 | - |
| 2.8009 | 51400 | 5.8957 | - |
| 2.8015 | 51410 | 5.6169 | - |
| 2.8020 | 51420 | 5.7187 | - |
| 2.8026 | 51430 | 5.5747 | - |
| 2.8031 | 51440 | 5.4847 | - |
| 2.8037 | 51450 | 5.6148 | - |
| 2.8042 | 51460 | 5.5977 | - |
| 2.8048 | 51470 | 5.6254 | - |
| 2.8053 | 51480 | 5.4435 | - |
| 2.8058 | 51490 | 5.664 | - |
| 2.8064 | 51500 | 5.5579 | - |
| 2.8069 | 51510 | 5.5868 | - |
| 2.8075 | 51520 | 5.4853 | - |
| 2.8080 | 51530 | 5.2575 | - |
| 2.8086 | 51540 | 5.5875 | - |
| 2.8091 | 51550 | 5.7268 | - |
| 2.8097 | 51560 | 5.5028 | - |
| 2.8102 | 51570 | 5.5252 | - |
| 2.8107 | 51580 | 5.6259 | - |
| 2.8113 | 51590 | 5.7586 | - |
| 2.8118 | 51600 | 5.4726 | - |
| 2.8124 | 51610 | 5.7395 | - |
| 2.8129 | 51620 | 5.8081 | - |
| 2.8135 | 51630 | 5.7112 | - |
| 2.8140 | 51640 | 5.5666 | - |
| 2.8146 | 51650 | 5.8554 | - |
| 2.8151 | 51660 | 5.594 | - |
| 2.8157 | 51670 | 5.3511 | - |
| 2.8162 | 51680 | 5.5896 | - |
| 2.8167 | 51690 | 5.5693 | - |
| 2.8173 | 51700 | 5.6954 | - |
| 2.8178 | 51710 | 5.4631 | - |
| 2.8184 | 51720 | 5.6467 | - |
| 2.8189 | 51730 | 5.4903 | - |
| 2.8195 | 51740 | 5.43 | - |
| 2.8200 | 51750 | 5.4481 | - |
| 2.8206 | 51760 | 5.4722 | - |
| 2.8211 | 51770 | 5.9013 | - |
| 2.8216 | 51780 | 5.4764 | - |
| 2.8222 | 51790 | 5.657 | - |
| 2.8227 | 51800 | 5.6582 | - |
| 2.8233 | 51810 | 5.6286 | - |
| 2.8238 | 51820 | 5.6302 | - |
| 2.8244 | 51830 | 5.5555 | - |
| 2.8249 | 51840 | 5.749 | - |
| 2.8255 | 51850 | 5.5893 | - |
| 2.8260 | 51860 | 5.4554 | - |
| 2.8265 | 51870 | 5.6696 | - |
| 2.8271 | 51880 | 5.6425 | - |
| 2.8276 | 51890 | 5.348 | - |
| 2.8282 | 51900 | 5.7585 | - |
| 2.8287 | 51910 | 5.5065 | - |
| 2.8293 | 51920 | 5.5765 | - |
| 2.8298 | 51930 | 5.7702 | - |
| 2.8304 | 51940 | 5.635 | - |
| 2.8309 | 51950 | 5.5945 | - |
| 2.8315 | 51960 | 5.5393 | - |
| 2.8320 | 51970 | 5.3896 | - |
| 2.8325 | 51980 | 5.5838 | - |
| 2.8331 | 51990 | 5.3242 | - |
| 2.8336 | 52000 | 5.6083 | 0.6805 |
| 2.8342 | 52010 | 5.4936 | - |
| 2.8347 | 52020 | 5.6309 | - |
| 2.8353 | 52030 | 5.5087 | - |
| 2.8358 | 52040 | 5.7317 | - |
| 2.8364 | 52050 | 5.5871 | - |
| 2.8369 | 52060 | 5.5571 | - |
| 2.8374 | 52070 | 5.6675 | - |
| 2.8380 | 52080 | 5.4032 | - |
| 2.8385 | 52090 | 5.4155 | - |
| 2.8391 | 52100 | 5.5497 | - |
| 2.8396 | 52110 | 5.6392 | - |
| 2.8402 | 52120 | 5.6504 | - |
| 2.8407 | 52130 | 5.8148 | - |
| 2.8413 | 52140 | 5.7348 | - |
| 2.8418 | 52150 | 5.6421 | - |
| 2.8424 | 52160 | 5.5171 | - |
| 2.8429 | 52170 | 5.6149 | - |
| 2.8434 | 52180 | 5.5876 | - |
| 2.8440 | 52190 | 5.7409 | - |
| 2.8445 | 52200 | 5.8465 | - |
| 2.8451 | 52210 | 5.5615 | - |
| 2.8456 | 52220 | 5.6057 | - |
| 2.8462 | 52230 | 5.57 | - |
| 2.8467 | 52240 | 5.4778 | - |
| 2.8473 | 52250 | 5.5094 | - |
| 2.8478 | 52260 | 5.7219 | - |
| 2.8483 | 52270 | 5.4942 | - |
| 2.8489 | 52280 | 5.5147 | - |
| 2.8494 | 52290 | 5.6946 | - |
| 2.8500 | 52300 | 5.5748 | - |
| 2.8505 | 52310 | 5.4842 | - |
| 2.8511 | 52320 | 5.6062 | - |
| 2.8516 | 52330 | 5.3538 | - |
| 2.8522 | 52340 | 5.3474 | - |
| 2.8527 | 52350 | 5.6965 | - |
| 2.8533 | 52360 | 5.7306 | - |
| 2.8538 | 52370 | 5.7544 | - |
| 2.8543 | 52380 | 5.4435 | - |
| 2.8549 | 52390 | 5.5702 | - |
| 2.8554 | 52400 | 5.743 | - |
| 2.8560 | 52410 | 5.5308 | - |
| 2.8565 | 52420 | 5.5029 | - |
| 2.8571 | 52430 | 5.5017 | - |
| 2.8576 | 52440 | 5.7163 | - |
| 2.8582 | 52450 | 5.6224 | - |
| 2.8587 | 52460 | 5.7937 | - |
| 2.8592 | 52470 | 5.4646 | - |
| 2.8598 | 52480 | 5.498 | - |
| 2.8603 | 52490 | 5.4599 | - |
| 2.8609 | 52500 | 5.5736 | - |
| 2.8614 | 52510 | 5.8178 | - |
| 2.8620 | 52520 | 5.5245 | - |
| 2.8625 | 52530 | 5.7606 | - |
| 2.8631 | 52540 | 5.4411 | - |
| 2.8636 | 52550 | 5.6559 | - |
| 2.8641 | 52560 | 5.5477 | - |
| 2.8647 | 52570 | 5.8549 | - |
| 2.8652 | 52580 | 5.5613 | - |
| 2.8658 | 52590 | 5.6457 | - |
| 2.8663 | 52600 | 5.6293 | - |
| 2.8669 | 52610 | 5.4604 | - |
| 2.8674 | 52620 | 5.7045 | - |
| 2.8680 | 52630 | 5.5714 | - |
| 2.8685 | 52640 | 5.3648 | - |
| 2.8691 | 52650 | 5.69 | - |
| 2.8696 | 52660 | 5.34 | - |
| 2.8701 | 52670 | 5.5923 | - |
| 2.8707 | 52680 | 5.5755 | - |
| 2.8712 | 52690 | 5.5872 | - |
| 2.8718 | 52700 | 5.2809 | - |
| 2.8723 | 52710 | 5.526 | - |
| 2.8729 | 52720 | 5.6748 | - |
| 2.8734 | 52730 | 5.2735 | - |
| 2.8740 | 52740 | 5.5914 | - |
| 2.8745 | 52750 | 5.5986 | - |
| 2.8750 | 52760 | 5.7422 | - |
| 2.8756 | 52770 | 5.6246 | - |
| 2.8761 | 52780 | 5.408 | - |
| 2.8767 | 52790 | 5.595 | - |
| 2.8772 | 52800 | 5.5321 | - |
| 2.8778 | 52810 | 5.607 | - |
| 2.8783 | 52820 | 5.6873 | - |
| 2.8789 | 52830 | 5.5507 | - |
| 2.8794 | 52840 | 5.5249 | - |
| 2.8800 | 52850 | 5.4721 | - |
| 2.8805 | 52860 | 5.6787 | - |
| 2.8810 | 52870 | 5.4427 | - |
| 2.8816 | 52880 | 5.6331 | - |
| 2.8821 | 52890 | 5.6982 | - |
| 2.8827 | 52900 | 5.3201 | - |
| 2.8832 | 52910 | 5.5286 | - |
| 2.8838 | 52920 | 5.6086 | - |
| 2.8843 | 52930 | 5.789 | - |
| 2.8849 | 52940 | 5.6753 | - |
| 2.8854 | 52950 | 5.5422 | - |
| 2.8859 | 52960 | 5.3623 | - |
| 2.8865 | 52970 | 5.5018 | - |
| 2.8870 | 52980 | 5.6099 | - |
| 2.8876 | 52990 | 5.7974 | - |
| 2.8881 | 53000 | 5.7056 | 0.6817 |
| 2.8887 | 53010 | 5.6461 | - |
| 2.8892 | 53020 | 5.468 | - |
| 2.8898 | 53030 | 5.7035 | - |
| 2.8903 | 53040 | 5.4613 | - |
| 2.8909 | 53050 | 5.5115 | - |
| 2.8914 | 53060 | 5.6231 | - |
| 2.8919 | 53070 | 5.3764 | - |
| 2.8925 | 53080 | 5.4412 | - |
| 2.8930 | 53090 | 5.4827 | - |
| 2.8936 | 53100 | 5.4412 | - |
| 2.8941 | 53110 | 5.5355 | - |
| 2.8947 | 53120 | 5.5685 | - |
| 2.8952 | 53130 | 5.665 | - |
| 2.8958 | 53140 | 5.5051 | - |
| 2.8963 | 53150 | 5.6201 | - |
| 2.8968 | 53160 | 5.6325 | - |
| 2.8974 | 53170 | 5.4208 | - |
| 2.8979 | 53180 | 5.4396 | - |
| 2.8985 | 53190 | 5.6203 | - |
| 2.8990 | 53200 | 5.5872 | - |
| 2.8996 | 53210 | 5.3552 | - |
| 2.9001 | 53220 | 5.6779 | - |
| 2.9007 | 53230 | 5.5869 | - |
| 2.9012 | 53240 | 5.7247 | - |
| 2.9017 | 53250 | 5.4455 | - |
| 2.9023 | 53260 | 5.6445 | - |
| 2.9028 | 53270 | 5.7081 | - |
| 2.9034 | 53280 | 5.5055 | - |
| 2.9039 | 53290 | 5.7847 | - |
| 2.9045 | 53300 | 5.719 | - |
| 2.9050 | 53310 | 5.8704 | - |
| 2.9056 | 53320 | 5.7039 | - |
| 2.9061 | 53330 | 5.833 | - |
| 2.9067 | 53340 | 5.491 | - |
| 2.9072 | 53350 | 5.6474 | - |
| 2.9077 | 53360 | 5.6822 | - |
| 2.9083 | 53370 | 5.459 | - |
| 2.9088 | 53380 | 5.7197 | - |
| 2.9094 | 53390 | 5.528 | - |
| 2.9099 | 53400 | 5.6483 | - |
| 2.9105 | 53410 | 5.4596 | - |
| 2.9110 | 53420 | 5.4936 | - |
| 2.9116 | 53430 | 5.7413 | - |
| 2.9121 | 53440 | 5.714 | - |
| 2.9126 | 53450 | 5.7028 | - |
| 2.9132 | 53460 | 5.3246 | - |
| 2.9137 | 53470 | 5.5292 | - |
| 2.9143 | 53480 | 5.6698 | - |
| 2.9148 | 53490 | 5.6745 | - |
| 2.9154 | 53500 | 5.557 | - |
| 2.9159 | 53510 | 5.5883 | - |
| 2.9165 | 53520 | 5.5046 | - |
| 2.9170 | 53530 | 5.6652 | - |
| 2.9176 | 53540 | 5.5935 | - |
| 2.9181 | 53550 | 5.6606 | - |
| 2.9186 | 53560 | 5.4939 | - |
| 2.9192 | 53570 | 5.6351 | - |
| 2.9197 | 53580 | 5.477 | - |
| 2.9203 | 53590 | 5.3632 | - |
| 2.9208 | 53600 | 5.4977 | - |
| 2.9214 | 53610 | 5.3788 | - |
| 2.9219 | 53620 | 5.641 | - |
| 2.9225 | 53630 | 5.6346 | - |
| 2.9230 | 53640 | 5.6158 | - |
| 2.9235 | 53650 | 5.5546 | - |
| 2.9241 | 53660 | 5.7029 | - |
| 2.9246 | 53670 | 5.5487 | - |
| 2.9252 | 53680 | 5.5897 | - |
| 2.9257 | 53690 | 5.6987 | - |
| 2.9263 | 53700 | 5.55 | - |
| 2.9268 | 53710 | 5.4887 | - |
| 2.9274 | 53720 | 5.4493 | - |
| 2.9279 | 53730 | 5.4898 | - |
| 2.9285 | 53740 | 5.5008 | - |
| 2.9290 | 53750 | 5.604 | - |
| 2.9295 | 53760 | 5.4701 | - |
| 2.9301 | 53770 | 5.7366 | - |
| 2.9306 | 53780 | 5.5862 | - |
| 2.9312 | 53790 | 5.5758 | - |
| 2.9317 | 53800 | 5.7022 | - |
| 2.9323 | 53810 | 5.9589 | - |
| 2.9328 | 53820 | 5.5826 | - |
| 2.9334 | 53830 | 5.4193 | - |
| 2.9339 | 53840 | 5.5036 | - |
| 2.9344 | 53850 | 5.5074 | - |
| 2.9350 | 53860 | 5.8808 | - |
| 2.9355 | 53870 | 5.6421 | - |
| 2.9361 | 53880 | 5.7482 | - |
| 2.9366 | 53890 | 5.5955 | - |
| 2.9372 | 53900 | 5.6103 | - |
| 2.9377 | 53910 | 5.625 | - |
| 2.9383 | 53920 | 5.4818 | - |
| 2.9388 | 53930 | 5.4601 | - |
| 2.9393 | 53940 | 5.6007 | - |
| 2.9399 | 53950 | 5.7778 | - |
| 2.9404 | 53960 | 5.3172 | - |
| 2.9410 | 53970 | 5.4216 | - |
| 2.9415 | 53980 | 5.4236 | - |
| 2.9421 | 53990 | 5.4378 | - |
| 2.9426 | 54000 | 5.5305 | 0.6821 |
| 2.9432 | 54010 | 5.897 | - |
| 2.9437 | 54020 | 5.3386 | - |
| 2.9443 | 54030 | 5.3862 | - |
| 2.9448 | 54040 | 5.6761 | - |
| 2.9453 | 54050 | 5.5551 | - |
| 2.9459 | 54060 | 5.8509 | - |
| 2.9464 | 54070 | 5.6382 | - |
| 2.9470 | 54080 | 5.5494 | - |
| 2.9475 | 54090 | 5.3132 | - |
| 2.9481 | 54100 | 5.7431 | - |
| 2.9486 | 54110 | 5.4777 | - |
| 2.9492 | 54120 | 5.8205 | - |
| 2.9497 | 54130 | 5.5702 | - |
| 2.9502 | 54140 | 5.5749 | - |
| 2.9508 | 54150 | 5.7768 | - |
| 2.9513 | 54160 | 5.6337 | - |
| 2.9519 | 54170 | 5.6445 | - |
| 2.9524 | 54180 | 5.5242 | - |
| 2.9530 | 54190 | 5.6038 | - |
| 2.9535 | 54200 | 5.4655 | - |
| 2.9541 | 54210 | 5.7553 | - |
| 2.9546 | 54220 | 5.6806 | - |
| 2.9552 | 54230 | 5.6882 | - |
| 2.9557 | 54240 | 5.7861 | - |
| 2.9562 | 54250 | 5.576 | - |
| 2.9568 | 54260 | 5.5294 | - |
| 2.9573 | 54270 | 5.6777 | - |
| 2.9579 | 54280 | 5.4977 | - |
| 2.9584 | 54290 | 5.5806 | - |
| 2.9590 | 54300 | 5.5158 | - |
| 2.9595 | 54310 | 5.4109 | - |
| 2.9601 | 54320 | 5.6239 | - |
| 2.9606 | 54330 | 5.5118 | - |
| 2.9611 | 54340 | 5.6751 | - |
| 2.9617 | 54350 | 5.6862 | - |
| 2.9622 | 54360 | 5.736 | - |
| 2.9628 | 54370 | 5.4573 | - |
| 2.9633 | 54380 | 5.7203 | - |
| 2.9639 | 54390 | 5.7557 | - |
| 2.9644 | 54400 | 5.5273 | - |
| 2.9650 | 54410 | 5.6985 | - |
| 2.9655 | 54420 | 5.4401 | - |
| 2.9661 | 54430 | 5.6462 | - |
| 2.9666 | 54440 | 5.6004 | - |
| 2.9671 | 54450 | 5.5151 | - |
| 2.9677 | 54460 | 5.7772 | - |
| 2.9682 | 54470 | 5.6286 | - |
| 2.9688 | 54480 | 5.6102 | - |
| 2.9693 | 54490 | 5.7993 | - |
| 2.9699 | 54500 | 5.697 | - |
| 2.9704 | 54510 | 5.7695 | - |
| 2.9710 | 54520 | 5.6035 | - |
| 2.9715 | 54530 | 5.6179 | - |
| 2.9720 | 54540 | 5.6055 | - |
| 2.9726 | 54550 | 5.399 | - |
| 2.9731 | 54560 | 5.5521 | - |
| 2.9737 | 54570 | 5.5244 | - |
| 2.9742 | 54580 | 5.4442 | - |
| 2.9748 | 54590 | 5.7315 | - |
| 2.9753 | 54600 | 5.6499 | - |
| 2.9759 | 54610 | 5.5721 | - |
| 2.9764 | 54620 | 5.3681 | - |
| 2.9769 | 54630 | 5.713 | - |
| 2.9775 | 54640 | 5.7453 | - |
| 2.9780 | 54650 | 5.6552 | - |
| 2.9786 | 54660 | 5.6957 | - |
| 2.9791 | 54670 | 5.5794 | - |
| 2.9797 | 54680 | 5.7587 | - |
| 2.9802 | 54690 | 5.48 | - |
| 2.9808 | 54700 | 5.7066 | - |
| 2.9813 | 54710 | 5.5667 | - |
| 2.9819 | 54720 | 5.377 | - |
| 2.9824 | 54730 | 5.6398 | - |
| 2.9829 | 54740 | 5.4893 | - |
| 2.9835 | 54750 | 5.5184 | - |
| 2.9840 | 54760 | 5.6245 | - |
| 2.9846 | 54770 | 5.2898 | - |
| 2.9851 | 54780 | 5.8113 | - |
| 2.9857 | 54790 | 5.5647 | - |
| 2.9862 | 54800 | 5.6918 | - |
| 2.9868 | 54810 | 5.3488 | - |
| 2.9873 | 54820 | 5.5563 | - |
| 2.9878 | 54830 | 5.2022 | - |
| 2.9884 | 54840 | 5.2451 | - |
| 2.9889 | 54850 | 5.5584 | - |
| 2.9895 | 54860 | 5.7062 | - |
| 2.9900 | 54870 | 5.3936 | - |
| 2.9906 | 54880 | 5.6486 | - |
| 2.9911 | 54890 | 5.6323 | - |
| 2.9917 | 54900 | 5.6134 | - |
| 2.9922 | 54910 | 5.6039 | - |
| 2.9928 | 54920 | 5.5726 | - |
| 2.9933 | 54930 | 5.4652 | - |
| 2.9938 | 54940 | 5.4766 | - |
| 2.9944 | 54950 | 5.5543 | - |
| 2.9949 | 54960 | 5.593 | - |
| 2.9955 | 54970 | 5.6998 | - |
| 2.9960 | 54980 | 5.6396 | - |
| 2.9966 | 54990 | 5.582 | - |
| 2.9971 | 55000 | 5.7142 | 0.6842 |
| 2.9977 | 55010 | 5.4101 | - |
| 2.9982 | 55020 | 5.6454 | - |
| 2.9987 | 55030 | 5.5434 | - |
| 2.9993 | 55040 | 5.6036 | - |
| 2.9998 | 55050 | 5.4097 | - |
| 3.0004 | 55060 | 5.3583 | - |
| 3.0009 | 55070 | 5.2952 | - |
| 3.0015 | 55080 | 5.3971 | - |
| 3.0020 | 55090 | 5.5311 | - |
| 3.0026 | 55100 | 5.6913 | - |
| 3.0031 | 55110 | 5.5875 | - |
| 3.0037 | 55120 | 5.5599 | - |
| 3.0042 | 55130 | 5.804 | - |
| 3.0047 | 55140 | 5.6496 | - |
| 3.0053 | 55150 | 5.27 | - |
| 3.0058 | 55160 | 5.5523 | - |
| 3.0064 | 55170 | 5.5205 | - |
| 3.0069 | 55180 | 5.6201 | - |
| 3.0075 | 55190 | 5.6143 | - |
| 3.0080 | 55200 | 5.9053 | - |
| 3.0086 | 55210 | 5.8354 | - |
| 3.0091 | 55220 | 5.5616 | - |
| 3.0096 | 55230 | 5.836 | - |
| 3.0102 | 55240 | 5.5796 | - |
| 3.0107 | 55250 | 5.6556 | - |
| 3.0113 | 55260 | 5.6915 | - |
| 3.0118 | 55270 | 5.3751 | - |
| 3.0124 | 55280 | 5.6772 | - |
| 3.0129 | 55290 | 5.7946 | - |
| 3.0135 | 55300 | 5.6745 | - |
| 3.0140 | 55310 | 5.4011 | - |
| 3.0145 | 55320 | 5.6492 | - |
| 3.0151 | 55330 | 5.4737 | - |
| 3.0156 | 55340 | 5.429 | - |
| 3.0162 | 55350 | 5.6496 | - |
| 3.0167 | 55360 | 5.569 | - |
| 3.0173 | 55370 | 5.5918 | - |
| 3.0178 | 55380 | 5.6284 | - |
| 3.0184 | 55390 | 5.7057 | - |
| 3.0189 | 55400 | 5.4078 | - |
| 3.0195 | 55410 | 5.8169 | - |
| 3.0200 | 55420 | 5.5257 | - |
| 3.0205 | 55430 | 5.7055 | - |
| 3.0211 | 55440 | 5.6289 | - |
| 3.0216 | 55450 | 5.555 | - |
| 3.0222 | 55460 | 5.5882 | - |
| 3.0227 | 55470 | 5.7336 | - |
| 3.0233 | 55480 | 5.6562 | - |
| 3.0238 | 55490 | 5.4452 | - |
| 3.0244 | 55500 | 5.6725 | - |
| 3.0249 | 55510 | 5.5872 | - |
| 3.0254 | 55520 | 5.4538 | - |
| 3.0260 | 55530 | 5.477 | - |
| 3.0265 | 55540 | 5.5479 | - |
| 3.0271 | 55550 | 5.6758 | - |
| 3.0276 | 55560 | 5.559 | - |
| 3.0282 | 55570 | 5.5864 | - |
| 3.0287 | 55580 | 5.6005 | - |
| 3.0293 | 55590 | 5.5307 | - |
| 3.0298 | 55600 | 5.4761 | - |
| 3.0304 | 55610 | 5.3996 | - |
| 3.0309 | 55620 | 5.3713 | - |
| 3.0314 | 55630 | 5.4874 | - |
| 3.0320 | 55640 | 5.5463 | - |
| 3.0325 | 55650 | 5.5725 | - |
| 3.0331 | 55660 | 5.7445 | - |
| 3.0336 | 55670 | 5.6929 | - |
| 3.0342 | 55680 | 5.5036 | - |
| 3.0347 | 55690 | 5.5561 | - |
| 3.0353 | 55700 | 5.7637 | - |
| 3.0358 | 55710 | 5.6853 | - |
| 3.0363 | 55720 | 5.4626 | - |
| 3.0369 | 55730 | 5.4956 | - |
| 3.0374 | 55740 | 5.6613 | - |
| 3.0380 | 55750 | 5.6244 | - |
| 3.0385 | 55760 | 5.1816 | - |
| 3.0391 | 55770 | 5.5166 | - |
| 3.0396 | 55780 | 5.4245 | - |
| 3.0402 | 55790 | 5.5932 | - |
| 3.0407 | 55800 | 5.6966 | - |
| 3.0413 | 55810 | 5.4808 | - |
| 3.0418 | 55820 | 5.5518 | - |
| 3.0423 | 55830 | 5.5923 | - |
| 3.0429 | 55840 | 5.6764 | - |
| 3.0434 | 55850 | 5.8104 | - |
| 3.0440 | 55860 | 5.9788 | - |
| 3.0445 | 55870 | 5.7056 | - |
| 3.0451 | 55880 | 5.4656 | - |
| 3.0456 | 55890 | 5.4432 | - |
| 3.0462 | 55900 | 5.6034 | - |
| 3.0467 | 55910 | 5.5752 | - |
| 3.0472 | 55920 | 5.713 | - |
| 3.0478 | 55930 | 5.4845 | - |
| 3.0483 | 55940 | 5.4625 | - |
| 3.0489 | 55950 | 5.8045 | - |
| 3.0494 | 55960 | 5.3101 | - |
| 3.0500 | 55970 | 5.5784 | - |
| 3.0505 | 55980 | 5.5667 | - |
| 3.0511 | 55990 | 5.7012 | - |
| 3.0516 | 56000 | 5.5753 | 0.6861 |
| 3.0521 | 56010 | 5.6271 | - |
| 3.0527 | 56020 | 5.5163 | - |
| 3.0532 | 56030 | 5.6268 | - |
| 3.0538 | 56040 | 5.6766 | - |
| 3.0543 | 56050 | 5.597 | - |
| 3.0549 | 56060 | 5.3393 | - |
| 3.0554 | 56070 | 5.7873 | - |
| 3.0560 | 56080 | 5.6872 | - |
| 3.0565 | 56090 | 5.6222 | - |
| 3.0571 | 56100 | 5.6228 | - |
| 3.0576 | 56110 | 5.4617 | - |
| 3.0581 | 56120 | 5.6157 | - |
| 3.0587 | 56130 | 5.7055 | - |
| 3.0592 | 56140 | 5.6636 | - |
| 3.0598 | 56150 | 5.6502 | - |
| 3.0603 | 56160 | 5.2565 | - |
| 3.0609 | 56170 | 5.6517 | - |
| 3.0614 | 56180 | 5.6513 | - |
| 3.0620 | 56190 | 5.6463 | - |
| 3.0625 | 56200 | 5.596 | - |
| 3.0630 | 56210 | 5.5736 | - |
| 3.0636 | 56220 | 5.7163 | - |
| 3.0641 | 56230 | 5.5228 | - |
| 3.0647 | 56240 | 5.6735 | - |
| 3.0652 | 56250 | 5.473 | - |
| 3.0658 | 56260 | 5.6879 | - |
| 3.0663 | 56270 | 5.475 | - |
| 3.0669 | 56280 | 5.745 | - |
| 3.0674 | 56290 | 5.5817 | - |
| 3.0680 | 56300 | 5.6066 | - |
| 3.0685 | 56310 | 5.4293 | - |
| 3.0690 | 56320 | 5.6194 | - |
| 3.0696 | 56330 | 5.6671 | - |
| 3.0701 | 56340 | 5.5364 | - |
| 3.0707 | 56350 | 5.6226 | - |
| 3.0712 | 56360 | 5.5551 | - |
| 3.0718 | 56370 | 5.6126 | - |
| 3.0723 | 56380 | 5.7158 | - |
| 3.0729 | 56390 | 5.2994 | - |
| 3.0734 | 56400 | 5.3322 | - |
| 3.0739 | 56410 | 5.4563 | - |
| 3.0745 | 56420 | 5.5693 | - |
| 3.0750 | 56430 | 5.6385 | - |
| 3.0756 | 56440 | 5.6884 | - |
| 3.0761 | 56450 | 5.4577 | - |
| 3.0767 | 56460 | 5.6272 | - |
| 3.0772 | 56470 | 5.4874 | - |
| 3.0778 | 56480 | 5.5602 | - |
| 3.0783 | 56490 | 5.5704 | - |
| 3.0789 | 56500 | 5.6481 | - |
| 3.0794 | 56510 | 5.4939 | - |
| 3.0799 | 56520 | 5.7675 | - |
| 3.0805 | 56530 | 5.5666 | - |
| 3.0810 | 56540 | 5.6103 | - |
| 3.0816 | 56550 | 5.7937 | - |
| 3.0821 | 56560 | 5.5073 | - |
| 3.0827 | 56570 | 5.6215 | - |
| 3.0832 | 56580 | 5.7401 | - |
| 3.0838 | 56590 | 5.6248 | - |
| 3.0843 | 56600 | 5.3983 | - |
| 3.0848 | 56610 | 5.5149 | - |
| 3.0854 | 56620 | 5.6285 | - |
| 3.0859 | 56630 | 5.4339 | - |
| 3.0865 | 56640 | 5.5622 | - |
| 3.0870 | 56650 | 5.5437 | - |
| 3.0876 | 56660 | 5.6851 | - |
| 3.0881 | 56670 | 5.5328 | - |
| 3.0887 | 56680 | 5.7663 | - |
| 3.0892 | 56690 | 5.79 | - |
| 3.0897 | 56700 | 5.6975 | - |
| 3.0903 | 56710 | 5.6608 | - |
| 3.0908 | 56720 | 5.6717 | - |
| 3.0914 | 56730 | 5.7252 | - |
| 3.0919 | 56740 | 5.6331 | - |
| 3.0925 | 56750 | 5.4945 | - |
| 3.0930 | 56760 | 5.6467 | - |
| 3.0936 | 56770 | 5.6452 | - |
| 3.0941 | 56780 | 5.7271 | - |
| 3.0947 | 56790 | 5.501 | - |
| 3.0952 | 56800 | 5.351 | - |
| 3.0957 | 56810 | 5.556 | - |
| 3.0963 | 56820 | 5.9188 | - |
| 3.0968 | 56830 | 5.4939 | - |
| 3.0974 | 56840 | 5.5489 | - |
| 3.0979 | 56850 | 5.7144 | - |
| 3.0985 | 56860 | 5.4187 | - |
| 3.0990 | 56870 | 5.8872 | - |
| 3.0996 | 56880 | 5.5578 | - |
| 3.1001 | 56890 | 5.6627 | - |
| 3.1006 | 56900 | 5.5523 | - |
| 3.1012 | 56910 | 5.5085 | - |
| 3.1017 | 56920 | 5.625 | - |
| 3.1023 | 56930 | 5.6228 | - |
| 3.1028 | 56940 | 5.7574 | - |
| 3.1034 | 56950 | 5.6896 | - |
| 3.1039 | 56960 | 5.6527 | - |
| 3.1045 | 56970 | 5.6329 | - |
| 3.1050 | 56980 | 5.5351 | - |
| 3.1056 | 56990 | 5.5039 | - |
| 3.1061 | 57000 | 5.5551 | 0.6853 |
| 3.1066 | 57010 | 5.5998 | - |
| 3.1072 | 57020 | 5.2482 | - |
| 3.1077 | 57030 | 5.4847 | - |
| 3.1083 | 57040 | 5.5472 | - |
| 3.1088 | 57050 | 5.6949 | - |
| 3.1094 | 57060 | 5.6161 | - |
| 3.1099 | 57070 | 5.384 | - |
| 3.1105 | 57080 | 5.4942 | - |
| 3.1110 | 57090 | 5.5103 | - |
| 3.1115 | 57100 | 5.4479 | - |
| 3.1121 | 57110 | 5.4793 | - |
| 3.1126 | 57120 | 5.671 | - |
| 3.1132 | 57130 | 5.5017 | - |
| 3.1137 | 57140 | 5.4549 | - |
| 3.1143 | 57150 | 5.3467 | - |
| 3.1148 | 57160 | 5.6319 | - |
| 3.1154 | 57170 | 5.3813 | - |
| 3.1159 | 57180 | 5.6135 | - |
| 3.1165 | 57190 | 5.4474 | - |
| 3.1170 | 57200 | 5.6135 | - |
| 3.1175 | 57210 | 5.787 | - |
| 3.1181 | 57220 | 5.4014 | - |
| 3.1186 | 57230 | 5.4324 | - |
| 3.1192 | 57240 | 5.6683 | - |
| 3.1197 | 57250 | 5.4596 | - |
| 3.1203 | 57260 | 5.7858 | - |
| 3.1208 | 57270 | 5.5289 | - |
| 3.1214 | 57280 | 5.8035 | - |
| 3.1219 | 57290 | 5.5908 | - |
| 3.1224 | 57300 | 5.5614 | - |
| 3.1230 | 57310 | 5.6079 | - |
| 3.1235 | 57320 | 5.5993 | - |
| 3.1241 | 57330 | 5.6513 | - |
| 3.1246 | 57340 | 5.4558 | - |
| 3.1252 | 57350 | 5.6605 | - |
| 3.1257 | 57360 | 5.3564 | - |
| 3.1263 | 57370 | 5.3724 | - |
| 3.1268 | 57380 | 5.4827 | - |
| 3.1274 | 57390 | 5.7043 | - |
| 3.1279 | 57400 | 5.6769 | - |
| 3.1284 | 57410 | 5.629 | - |
| 3.1290 | 57420 | 5.5853 | - |
| 3.1295 | 57430 | 5.4585 | - |
| 3.1301 | 57440 | 5.5265 | - |
| 3.1306 | 57450 | 5.7372 | - |
| 3.1312 | 57460 | 5.4251 | - |
| 3.1317 | 57470 | 5.7587 | - |
| 3.1323 | 57480 | 5.7622 | - |
| 3.1328 | 57490 | 5.3503 | - |
| 3.1333 | 57500 | 5.5668 | - |
| 3.1339 | 57510 | 5.7925 | - |
| 3.1344 | 57520 | 5.5656 | - |
| 3.1350 | 57530 | 5.3515 | - |
| 3.1355 | 57540 | 5.5141 | - |
| 3.1361 | 57550 | 5.5459 | - |
| 3.1366 | 57560 | 5.5267 | - |
| 3.1372 | 57570 | 5.3979 | - |
| 3.1377 | 57580 | 5.6819 | - |
| 3.1382 | 57590 | 5.6676 | - |
| 3.1388 | 57600 | 5.4149 | - |
| 3.1393 | 57610 | 5.4246 | - |
| 3.1399 | 57620 | 5.7314 | - |
| 3.1404 | 57630 | 5.4604 | - |
| 3.1410 | 57640 | 5.4791 | - |
| 3.1415 | 57650 | 5.6239 | - |
| 3.1421 | 57660 | 5.6317 | - |
| 3.1426 | 57670 | 5.4297 | - |
| 3.1432 | 57680 | 5.4567 | - |
| 3.1437 | 57690 | 5.3911 | - |
| 3.1442 | 57700 | 5.0396 | - |
| 3.1448 | 57710 | 5.7586 | - |
| 3.1453 | 57720 | 5.4124 | - |
| 3.1459 | 57730 | 5.3874 | - |
| 3.1464 | 57740 | 5.542 | - |
| 3.1470 | 57750 | 5.5085 | - |
| 3.1475 | 57760 | 5.6589 | - |
| 3.1481 | 57770 | 5.519 | - |
| 3.1486 | 57780 | 5.4556 | - |
| 3.1491 | 57790 | 5.4852 | - |
| 3.1497 | 57800 | 5.4455 | - |
| 3.1502 | 57810 | 5.7718 | - |
| 3.1508 | 57820 | 5.6192 | - |
| 3.1513 | 57830 | 5.4136 | - |
| 3.1519 | 57840 | 5.4781 | - |
| 3.1524 | 57850 | 5.6758 | - |
| 3.1530 | 57860 | 5.5014 | - |
| 3.1535 | 57870 | 5.4511 | - |
| 3.1541 | 57880 | 5.6292 | - |
| 3.1546 | 57890 | 5.6283 | - |
| 3.1551 | 57900 | 5.593 | - |
| 3.1557 | 57910 | 5.8281 | - |
| 3.1562 | 57920 | 5.5885 | - |
| 3.1568 | 57930 | 5.5886 | - |
| 3.1573 | 57940 | 5.4311 | - |
| 3.1579 | 57950 | 5.7678 | - |
| 3.1584 | 57960 | 5.6736 | - |
| 3.1590 | 57970 | 5.593 | - |
| 3.1595 | 57980 | 5.3814 | - |
| 3.1600 | 57990 | 5.5207 | - |
| 3.1606 | 58000 | 5.6562 | 0.6861 |
| 3.1611 | 58010 | 5.5805 | - |
| 3.1617 | 58020 | 5.5125 | - |
| 3.1622 | 58030 | 5.7202 | - |
| 3.1628 | 58040 | 5.6758 | - |
| 3.1633 | 58050 | 5.5474 | - |
| 3.1639 | 58060 | 5.5028 | - |
| 3.1644 | 58070 | 5.4458 | - |
| 3.1650 | 58080 | 5.5807 | - |
| 3.1655 | 58090 | 5.6071 | - |
| 3.1660 | 58100 | 5.4404 | - |
| 3.1666 | 58110 | 5.6287 | - |
| 3.1671 | 58120 | 5.5156 | - |
| 3.1677 | 58130 | 5.762 | - |
| 3.1682 | 58140 | 5.5995 | - |
| 3.1688 | 58150 | 5.6232 | - |
| 3.1693 | 58160 | 5.5056 | - |
| 3.1699 | 58170 | 5.3623 | - |
| 3.1704 | 58180 | 5.7739 | - |
| 3.1709 | 58190 | 5.683 | - |
| 3.1715 | 58200 | 5.4708 | - |
| 3.1720 | 58210 | 5.5302 | - |
| 3.1726 | 58220 | 5.3617 | - |
| 3.1731 | 58230 | 5.5949 | - |
| 3.1737 | 58240 | 5.5941 | - |
| 3.1742 | 58250 | 5.7129 | - |
| 3.1748 | 58260 | 5.508 | - |
| 3.1753 | 58270 | 5.5284 | - |
| 3.1758 | 58280 | 5.6354 | - |
| 3.1764 | 58290 | 5.5601 | - |
| 3.1769 | 58300 | 5.5798 | - |
| 3.1775 | 58310 | 5.4399 | - |
| 3.1780 | 58320 | 5.6234 | - |
| 3.1786 | 58330 | 5.7915 | - |
| 3.1791 | 58340 | 5.5255 | - |
| 3.1797 | 58350 | 5.5135 | - |
| 3.1802 | 58360 | 5.4672 | - |
| 3.1808 | 58370 | 5.6552 | - |
| 3.1813 | 58380 | 5.4875 | - |
| 3.1818 | 58390 | 5.5745 | - |
| 3.1824 | 58400 | 5.6628 | - |
| 3.1829 | 58410 | 5.6895 | - |
| 3.1835 | 58420 | 5.7178 | - |
| 3.1840 | 58430 | 5.3639 | - |
| 3.1846 | 58440 | 5.5627 | - |
| 3.1851 | 58450 | 5.6401 | - |
| 3.1857 | 58460 | 5.4899 | - |
| 3.1862 | 58470 | 5.29 | - |
| 3.1867 | 58480 | 5.726 | - |
| 3.1873 | 58490 | 5.7599 | - |
| 3.1878 | 58500 | 5.5582 | - |
| 3.1884 | 58510 | 5.634 | - |
| 3.1889 | 58520 | 5.5692 | - |
| 3.1895 | 58530 | 5.7085 | - |
| 3.1900 | 58540 | 5.6995 | - |
| 3.1906 | 58550 | 5.4618 | - |
| 3.1911 | 58560 | 5.4561 | - |
| 3.1917 | 58570 | 5.5303 | - |
| 3.1922 | 58580 | 5.8302 | - |
| 3.1927 | 58590 | 5.5 | - |
| 3.1933 | 58600 | 5.7056 | - |
| 3.1938 | 58610 | 5.7139 | - |
| 3.1944 | 58620 | 5.4831 | - |
| 3.1949 | 58630 | 5.5046 | - |
| 3.1955 | 58640 | 5.6135 | - |
| 3.1960 | 58650 | 5.6254 | - |
| 3.1966 | 58660 | 5.6416 | - |
| 3.1971 | 58670 | 5.4216 | - |
| 3.1976 | 58680 | 5.4249 | - |
| 3.1982 | 58690 | 5.4591 | - |
| 3.1987 | 58700 | 5.6812 | - |
| 3.1993 | 58710 | 5.5607 | - |
| 3.1998 | 58720 | 5.5359 | - |
| 3.2004 | 58730 | 5.5256 | - |
| 3.2009 | 58740 | 5.5933 | - |
| 3.2015 | 58750 | 5.6243 | - |
| 3.2020 | 58760 | 5.507 | - |
| 3.2026 | 58770 | 5.7533 | - |
| 3.2031 | 58780 | 5.5273 | - |
| 3.2036 | 58790 | 5.5793 | - |
| 3.2042 | 58800 | 5.7159 | - |
| 3.2047 | 58810 | 5.5331 | - |
| 3.2053 | 58820 | 5.3351 | - |
| 3.2058 | 58830 | 5.625 | - |
| 3.2064 | 58840 | 5.3467 | - |
| 3.2069 | 58850 | 5.8089 | - |
| 3.2075 | 58860 | 5.4686 | - |
| 3.2080 | 58870 | 5.723 | - |
| 3.2085 | 58880 | 5.3693 | - |
| 3.2091 | 58890 | 5.5807 | - |
| 3.2096 | 58900 | 5.4593 | - |
| 3.2102 | 58910 | 5.4486 | - |
| 3.2107 | 58920 | 5.7269 | - |
| 3.2113 | 58930 | 5.3156 | - |
| 3.2118 | 58940 | 5.5429 | - |
| 3.2124 | 58950 | 5.6287 | - |
| 3.2129 | 58960 | 5.5959 | - |
| 3.2134 | 58970 | 5.4247 | - |
| 3.2140 | 58980 | 5.7167 | - |
| 3.2145 | 58990 | 5.4522 | - |
| 3.2151 | 59000 | 5.557 | 0.6856 |
| 3.2156 | 59010 | 5.5889 | - |
| 3.2162 | 59020 | 5.5573 | - |
| 3.2167 | 59030 | 5.5054 | - |
| 3.2173 | 59040 | 5.6513 | - |
| 3.2178 | 59050 | 5.6169 | - |
| 3.2184 | 59060 | 5.5458 | - |
| 3.2189 | 59070 | 5.738 | - |
| 3.2194 | 59080 | 5.6662 | - |
| 3.2200 | 59090 | 5.7406 | - |
| 3.2205 | 59100 | 5.5018 | - |
| 3.2211 | 59110 | 5.5018 | - |
| 3.2216 | 59120 | 5.8907 | - |
| 3.2222 | 59130 | 5.5805 | - |
| 3.2227 | 59140 | 5.542 | - |
| 3.2233 | 59150 | 5.4348 | - |
| 3.2238 | 59160 | 5.4896 | - |
| 3.2243 | 59170 | 5.6443 | - |
| 3.2249 | 59180 | 5.5069 | - |
| 3.2254 | 59190 | 5.5129 | - |
| 3.2260 | 59200 | 5.513 | - |
| 3.2265 | 59210 | 5.8177 | - |
| 3.2271 | 59220 | 5.6802 | - |
| 3.2276 | 59230 | 5.7889 | - |
| 3.2282 | 59240 | 5.6119 | - |
| 3.2287 | 59250 | 5.3895 | - |
| 3.2293 | 59260 | 5.6526 | - |
| 3.2298 | 59270 | 5.7856 | - |
| 3.2303 | 59280 | 5.5756 | - |
| 3.2309 | 59290 | 5.5373 | - |
| 3.2314 | 59300 | 5.4793 | - |
| 3.2320 | 59310 | 5.4978 | - |
| 3.2325 | 59320 | 5.5003 | - |
| 3.2331 | 59330 | 5.1834 | - |
| 3.2336 | 59340 | 5.7158 | - |
| 3.2342 | 59350 | 5.514 | - |
| 3.2347 | 59360 | 5.4173 | - |
| 3.2352 | 59370 | 5.794 | - |
| 3.2358 | 59380 | 5.3471 | - |
| 3.2363 | 59390 | 5.7527 | - |
| 3.2369 | 59400 | 5.6036 | - |
| 3.2374 | 59410 | 5.7988 | - |
| 3.2380 | 59420 | 5.5516 | - |
| 3.2385 | 59430 | 5.3764 | - |
| 3.2391 | 59440 | 5.6718 | - |
| 3.2396 | 59450 | 5.5314 | - |
| 3.2402 | 59460 | 5.7532 | - |
| 3.2407 | 59470 | 5.4467 | - |
| 3.2412 | 59480 | 5.695 | - |
| 3.2418 | 59490 | 5.9174 | - |
| 3.2423 | 59500 | 5.5346 | - |
| 3.2429 | 59510 | 5.5121 | - |
| 3.2434 | 59520 | 5.8568 | - |
| 3.2440 | 59530 | 5.6714 | - |
| 3.2445 | 59540 | 5.6769 | - |
| 3.2451 | 59550 | 5.4573 | - |
| 3.2456 | 59560 | 5.7775 | - |
| 3.2461 | 59570 | 5.6558 | - |
| 3.2467 | 59580 | 5.2675 | - |
| 3.2472 | 59590 | 5.5149 | - |
| 3.2478 | 59600 | 5.5872 | - |
| 3.2483 | 59610 | 5.6985 | - |
| 3.2489 | 59620 | 5.6182 | - |
| 3.2494 | 59630 | 5.4078 | - |
| 3.2500 | 59640 | 5.6575 | - |
| 3.2505 | 59650 | 5.5966 | - |
| 3.2510 | 59660 | 5.6498 | - |
| 3.2516 | 59670 | 5.5586 | - |
| 3.2521 | 59680 | 5.5437 | - |
| 3.2527 | 59690 | 5.6981 | - |
| 3.2532 | 59700 | 5.6385 | - |
| 3.2538 | 59710 | 5.7033 | - |
| 3.2543 | 59720 | 5.577 | - |
| 3.2549 | 59730 | 5.4309 | - |
| 3.2554 | 59740 | 5.5558 | - |
| 3.2560 | 59750 | 5.5759 | - |
| 3.2565 | 59760 | 5.4988 | - |
| 3.2570 | 59770 | 5.7571 | - |
| 3.2576 | 59780 | 5.4421 | - |
| 3.2581 | 59790 | 5.6241 | - |
| 3.2587 | 59800 | 5.7173 | - |
| 3.2592 | 59810 | 5.4971 | - |
| 3.2598 | 59820 | 5.6281 | - |
| 3.2603 | 59830 | 5.671 | - |
| 3.2609 | 59840 | 5.5239 | - |
| 3.2614 | 59850 | 5.6336 | - |
| 3.2619 | 59860 | 5.7554 | - |
| 3.2625 | 59870 | 5.4657 | - |
| 3.2630 | 59880 | 5.5005 | - |
| 3.2636 | 59890 | 5.6111 | - |
| 3.2641 | 59900 | 5.5654 | - |
| 3.2647 | 59910 | 5.5107 | - |
| 3.2652 | 59920 | 5.8917 | - |
| 3.2658 | 59930 | 5.8541 | - |
| 3.2663 | 59940 | 5.6822 | - |
| 3.2669 | 59950 | 5.6078 | - |
| 3.2674 | 59960 | 5.7342 | - |
| 3.2679 | 59970 | 5.6555 | - |
| 3.2685 | 59980 | 5.6523 | - |
| 3.2690 | 59990 | 5.5005 | - |
| 3.2696 | 60000 | 5.7037 | 0.6859 |
| 3.2701 | 60010 | 5.6414 | - |
| 3.2707 | 60020 | 5.5841 | - |
| 3.2712 | 60030 | 5.5545 | - |
| 3.2718 | 60040 | 5.3981 | - |
| 3.2723 | 60050 | 5.4383 | - |
| 3.2728 | 60060 | 5.9282 | - |
| 3.2734 | 60070 | 5.5832 | - |
| 3.2739 | 60080 | 5.6165 | - |
| 3.2745 | 60090 | 5.722 | - |
| 3.2750 | 60100 | 5.7447 | - |
| 3.2756 | 60110 | 5.4406 | - |
| 3.2761 | 60120 | 5.7238 | - |
| 3.2767 | 60130 | 5.6901 | - |
| 3.2772 | 60140 | 5.5515 | - |
| 3.2778 | 60150 | 5.4063 | - |
| 3.2783 | 60160 | 5.8447 | - |
| 3.2788 | 60170 | 5.6298 | - |
| 3.2794 | 60180 | 5.6952 | - |
| 3.2799 | 60190 | 5.5804 | - |
| 3.2805 | 60200 | 5.7637 | - |
| 3.2810 | 60210 | 5.5596 | - |
| 3.2816 | 60220 | 5.3367 | - |
| 3.2821 | 60230 | 5.7428 | - |
| 3.2827 | 60240 | 5.5968 | - |
| 3.2832 | 60250 | 5.5314 | - |
| 3.2837 | 60260 | 5.6192 | - |
| 3.2843 | 60270 | 5.6236 | - |
| 3.2848 | 60280 | 5.7896 | - |
| 3.2854 | 60290 | 5.5831 | - |
| 3.2859 | 60300 | 5.5067 | - |
| 3.2865 | 60310 | 5.5723 | - |
| 3.2870 | 60320 | 5.4715 | - |
| 3.2876 | 60330 | 5.5897 | - |
| 3.2881 | 60340 | 5.5664 | - |
| 3.2886 | 60350 | 5.6267 | - |
| 3.2892 | 60360 | 5.6851 | - |
| 3.2897 | 60370 | 5.5269 | - |
| 3.2903 | 60380 | 5.5879 | - |
| 3.2908 | 60390 | 5.6319 | - |
| 3.2914 | 60400 | 5.5379 | - |
| 3.2919 | 60410 | 5.4835 | - |
| 3.2925 | 60420 | 5.6199 | - |
| 3.2930 | 60430 | 5.4495 | - |
| 3.2936 | 60440 | 5.6967 | - |
| 3.2941 | 60450 | 5.4904 | - |
| 3.2946 | 60460 | 5.6232 | - |
| 3.2952 | 60470 | 5.5179 | - |
| 3.2957 | 60480 | 5.1692 | - |
| 3.2963 | 60490 | 5.3444 | - |
| 3.2968 | 60500 | 5.4851 | - |
| 3.2974 | 60510 | 5.2982 | - |
| 3.2979 | 60520 | 5.6878 | - |
| 3.2985 | 60530 | 5.394 | - |
| 3.2990 | 60540 | 5.7277 | - |
| 3.2995 | 60550 | 5.4197 | - |
| 3.3001 | 60560 | 5.2554 | - |
| 3.3006 | 60570 | 5.5745 | - |
| 3.3012 | 60580 | 5.5593 | - |
| 3.3017 | 60590 | 5.6246 | - |
| 3.3023 | 60600 | 5.8037 | - |
| 3.3028 | 60610 | 5.5263 | - |
| 3.3034 | 60620 | 5.7662 | - |
| 3.3039 | 60630 | 5.7151 | - |
| 3.3045 | 60640 | 5.4433 | - |
| 3.3050 | 60650 | 5.6867 | - |
| 3.3055 | 60660 | 5.6977 | - |
| 3.3061 | 60670 | 5.5705 | - |
| 3.3066 | 60680 | 5.6524 | - |
| 3.3072 | 60690 | 5.6896 | - |
| 3.3077 | 60700 | 5.4993 | - |
| 3.3083 | 60710 | 5.6559 | - |
| 3.3088 | 60720 | 5.6364 | - |
| 3.3094 | 60730 | 5.637 | - |
| 3.3099 | 60740 | 5.4962 | - |
| 3.3104 | 60750 | 5.4421 | - |
| 3.3110 | 60760 | 5.7739 | - |
| 3.3115 | 60770 | 5.6904 | - |
| 3.3121 | 60780 | 5.6483 | - |
| 3.3126 | 60790 | 5.6515 | - |
| 3.3132 | 60800 | 5.7551 | - |
| 3.3137 | 60810 | 5.6467 | - |
| 3.3143 | 60820 | 5.5155 | - |
| 3.3148 | 60830 | 5.6196 | - |
| 3.3154 | 60840 | 5.8468 | - |
| 3.3159 | 60850 | 5.4907 | - |
| 3.3164 | 60860 | 5.4117 | - |
| 3.3170 | 60870 | 5.6358 | - |
| 3.3175 | 60880 | 5.8083 | - |
| 3.3181 | 60890 | 5.7489 | - |
| 3.3186 | 60900 | 5.4862 | - |
| 3.3192 | 60910 | 5.572 | - |
| 3.3197 | 60920 | 5.4279 | - |
| 3.3203 | 60930 | 5.6214 | - |
| 3.3208 | 60940 | 5.6259 | - |
| 3.3213 | 60950 | 5.607 | - |
| 3.3219 | 60960 | 5.3397 | - |
| 3.3224 | 60970 | 5.6182 | - |
| 3.3230 | 60980 | 5.5097 | - |
| 3.3235 | 60990 | 5.4914 | - |
| 3.3241 | 61000 | 5.5059 | 0.6868 |
| 3.3246 | 61010 | 5.6364 | - |
| 3.3252 | 61020 | 5.5115 | - |
| 3.3257 | 61030 | 5.6395 | - |
| 3.3262 | 61040 | 5.6745 | - |
| 3.3268 | 61050 | 5.5809 | - |
| 3.3273 | 61060 | 5.4611 | - |
| 3.3279 | 61070 | 5.5895 | - |
| 3.3284 | 61080 | 5.5433 | - |
| 3.3290 | 61090 | 5.6247 | - |
| 3.3295 | 61100 | 5.5467 | - |
| 3.3301 | 61110 | 5.4776 | - |
| 3.3306 | 61120 | 5.4531 | - |
| 3.3312 | 61130 | 5.2654 | - |
| 3.3317 | 61140 | 5.5977 | - |
| 3.3322 | 61150 | 5.6418 | - |
| 3.3328 | 61160 | 5.6139 | - |
| 3.3333 | 61170 | 5.6671 | - |
| 3.3339 | 61180 | 5.6352 | - |
| 3.3344 | 61190 | 5.6783 | - |
| 3.3350 | 61200 | 5.5478 | - |
| 3.3355 | 61210 | 5.828 | - |
| 3.3361 | 61220 | 5.4897 | - |
| 3.3366 | 61230 | 5.4266 | - |
| 3.3371 | 61240 | 5.6053 | - |
| 3.3377 | 61250 | 5.5886 | - |
| 3.3382 | 61260 | 5.5969 | - |
| 3.3388 | 61270 | 5.5582 | - |
| 3.3393 | 61280 | 5.6178 | - |
| 3.3399 | 61290 | 5.6699 | - |
| 3.3404 | 61300 | 5.6869 | - |
| 3.3410 | 61310 | 5.8755 | - |
| 3.3415 | 61320 | 5.5481 | - |
| 3.3421 | 61330 | 5.5062 | - |
| 3.3426 | 61340 | 5.5305 | - |
| 3.3431 | 61350 | 5.7591 | - |
| 3.3437 | 61360 | 5.3273 | - |
| 3.3442 | 61370 | 5.5072 | - |
| 3.3448 | 61380 | 5.607 | - |
| 3.3453 | 61390 | 5.4182 | - |
| 3.3459 | 61400 | 5.745 | - |
| 3.3464 | 61410 | 5.3752 | - |
| 3.3470 | 61420 | 5.4636 | - |
| 3.3475 | 61430 | 5.639 | - |
| 3.3480 | 61440 | 5.4804 | - |
| 3.3486 | 61450 | 5.6325 | - |
| 3.3491 | 61460 | 5.5723 | - |
| 3.3497 | 61470 | 5.6436 | - |
| 3.3502 | 61480 | 5.6738 | - |
| 3.3508 | 61490 | 5.3581 | - |
| 3.3513 | 61500 | 5.4945 | - |
| 3.3519 | 61510 | 5.7491 | - |
| 3.3524 | 61520 | 5.4823 | - |
| 3.3530 | 61530 | 5.2861 | - |
| 3.3535 | 61540 | 5.5444 | - |
| 3.3540 | 61550 | 5.7547 | - |
| 3.3546 | 61560 | 5.5847 | - |
| 3.3551 | 61570 | 5.6174 | - |
| 3.3557 | 61580 | 5.6794 | - |
| 3.3562 | 61590 | 5.4811 | - |
| 3.3568 | 61600 | 5.5589 | - |
| 3.3573 | 61610 | 5.4468 | - |
| 3.3579 | 61620 | 5.5258 | - |
| 3.3584 | 61630 | 5.7109 | - |
| 3.3589 | 61640 | 5.5792 | - |
| 3.3595 | 61650 | 5.4103 | - |
| 3.3600 | 61660 | 5.6937 | - |
| 3.3606 | 61670 | 5.6246 | - |
| 3.3611 | 61680 | 5.5012 | - |
| 3.3617 | 61690 | 5.7355 | - |
| 3.3622 | 61700 | 5.5066 | - |
| 3.3628 | 61710 | 5.7057 | - |
| 3.3633 | 61720 | 5.5485 | - |
| 3.3638 | 61730 | 5.36 | - |
| 3.3644 | 61740 | 5.4618 | - |
| 3.3649 | 61750 | 5.4116 | - |
| 3.3655 | 61760 | 5.4921 | - |
| 3.3660 | 61770 | 5.5345 | - |
| 3.3666 | 61780 | 5.5031 | - |
| 3.3671 | 61790 | 5.3937 | - |
| 3.3677 | 61800 | 5.3844 | - |
| 3.3682 | 61810 | 5.6416 | - |
| 3.3688 | 61820 | 5.5845 | - |
| 3.3693 | 61830 | 5.4377 | - |
| 3.3698 | 61840 | 5.3781 | - |
| 3.3704 | 61850 | 5.3343 | - |
| 3.3709 | 61860 | 5.5959 | - |
| 3.3715 | 61870 | 5.7429 | - |
| 3.3720 | 61880 | 5.5575 | - |
| 3.3726 | 61890 | 5.5603 | - |
| 3.3731 | 61900 | 5.9901 | - |
| 3.3737 | 61910 | 5.6336 | - |
| 3.3742 | 61920 | 5.6402 | - |
| 3.3747 | 61930 | 5.6655 | - |
| 3.3753 | 61940 | 5.5424 | - |
| 3.3758 | 61950 | 5.5503 | - |
| 3.3764 | 61960 | 5.7814 | - |
| 3.3769 | 61970 | 5.4815 | - |
| 3.3775 | 61980 | 5.6931 | - |
| 3.3780 | 61990 | 5.4237 | - |
| 3.3786 | 62000 | 5.6835 | 0.6850 |
</details>
### Framework Versions
- Python: 3.11.13
- Sentence Transformers: 4.1.0
- Transformers: 4.52.4
- PyTorch: 2.7.1+cu126
- Accelerate: 1.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
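For reproducibility, it can help to check that the runtime matches the versions listed above before loading the model. A minimal sketch, assuming only that the packages are installed under their usual distribution names:
```py
# Compare installed package versions against the ones reported above.
import importlib.metadata as md

trained_with = {
    "sentence-transformers": "4.1.0",
    "transformers": "4.52.4",
    "torch": "2.7.1+cu126",
    "accelerate": "1.7.0",
    "datasets": "3.6.0",
    "tokenizers": "0.21.1",
}
for pkg, want in trained_with.items():
    have = md.version(pkg)
    status = "OK" if have == want else "MISMATCH"
    print(f"{pkg}: installed {have}, trained with {want} [{status}]")
```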
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
megasitinurhalizah/chatbot-multitask
|
megasitinurhalizah
| 2025-06-14T06:00:26Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:18:54Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: multitask_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multitask_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
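As a rough guide, these settings map onto `transformers.TrainingArguments` as sketched below; the `output_dir` value and the surrounding `Trainer` wiring are assumptions, since the card does not specify them:
```py
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="multitask_model",   # assumption: not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",            # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```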
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1606 | 1.0 | 65 | 0.0531 |
| 0.0165 | 2.0 | 130 | 0.0098 |
| 0.0126 | 3.0 | 195 | 0.0077 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
LandCruiser/sn21_omegav1_1406_12
|
LandCruiser
| 2025-06-14T06:00:07Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-14T05:39:00Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omegav1_1406_11
|
LandCruiser
| 2025-06-14T06:00:05Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-14T05:38:59Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
LandCruiser/sn21_omegav1_1406_9
|
LandCruiser
| 2025-06-14T06:00:02Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-14T05:38:59Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
megasitinurhalizah/multitask_model
|
megasitinurhalizah
| 2025-06-14T05:59:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:59:38Z |
---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: multitask_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# multitask_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0077
## Model description
More information needed
## Intended uses & limitations
More information needed
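Absent further detail, a minimal loading sketch; `AutoModel` is used as a neutral default because the card does not state which task head the checkpoint carries:
```py
from transformers import AutoTokenizer, AutoModel

# Hypothetical usage: load the fine-tuned DistilBERT checkpoint from the Hub.
tokenizer = AutoTokenizer.from_pretrained("megasitinurhalizah/multitask_model")
model = AutoModel.from_pretrained("megasitinurhalizah/multitask_model")

inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model(**inputs)  # outputs.last_hidden_state: (1, seq_len, 768)
```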
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1606 | 1.0 | 65 | 0.0531 |
| 0.0165 | 2.0 | 130 | 0.0098 |
| 0.0126 | 3.0 | 195 | 0.0077 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Mahiye-Selin-Video/wATCH.VIRAL.Mahiye.Selin.Viral.Video.Tutorial.Official.VIDEO
|
Mahiye-Selin-Video
| 2025-06-14T05:58:42Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:58:05Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW Video](https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye)
<a href="https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
[🔴 CLICK HERE 🌐==►► Download Now Video](https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye)
Isreal Chudi
|
LandCruiser/sn21_omegav1_1406_1
|
LandCruiser
| 2025-06-14T05:56:01Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-06-14T05:38:56Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
magnusdtd/bio-clip-ft
|
magnusdtd
| 2025-06-14T05:55:32Z | 0 | 0 |
open_clip
|
[
"open_clip",
"en",
"license:mit",
"region:us"
] | null | 2025-06-13T15:21:27Z |
---
license: mit
language:
- en
library_name: open_clip
---
A fine-tuned version of BioCLIP, trained for 50 epochs on a specialized dataset.
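Since the card declares `library_name: open_clip`, loading can be sketched with open_clip's `hf-hub:` syntax; the example prompts below are assumptions, as the card does not describe the downstream task:
```py
import torch
import open_clip

# Load the fine-tuned BioCLIP checkpoint directly from the Hub.
model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:magnusdtd/bio-clip-ft")
tokenizer = open_clip.get_tokenizer("hf-hub:magnusdtd/bio-clip-ft")

text = tokenizer(["a photo of a leaf", "a photo of an insect"])
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)
```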
|
kutluhan16/demet3_fluxxx
|
kutluhan16
| 2025-06-14T05:54:02Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-06-13T20:43:51Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Demet3_Fluxxx
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
    "prompt": "TOK",
    "lora_weights": "https://huggingface.co/kutluhan16/demet3_fluxxx/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)

# Save each generated image to disk.
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')  # base model stated in this card
pipeline.load_lora_weights('kutluhan16/demet3_fluxxx', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2800
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kutluhan16/demet3_fluxxx/discussions) to add images that show off what you’ve made with this LoRA.
|
VIDEOS-18-ezenwanyi-Viral-Video/New.tutorial.ezenwanyi.Viral.Video.Leaks.Official
|
VIDEOS-18-ezenwanyi-Viral-Video
| 2025-06-14T05:53:29Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:53:08Z |
<animated-image data-catalyst=""><a href="https://tinyurl.com/5ye5v3bc?dfhgKasbonStudiosdfg" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="Foo" data-canonical-src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
HoangTran223/2_TinyLlama-1.1B-Chat-v0.6_cbc2c6ab-186e-46fb-ad26-97569d03f5e2
|
HoangTran223
| 2025-06-14T05:52:52Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T05:51:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
sm4rtdev/Nextplace_model
|
sm4rtdev
| 2025-06-14T05:52:12Z | 0 | 0 | null |
[
"en",
"base_model:Nickel5HF/NextPlace",
"base_model:finetune:Nickel5HF/NextPlace",
"license:mit",
"region:us"
] | null | 2025-06-13T12:26:30Z |
---
license: mit
language:
- en
base_model:
- Nickel5HF/NextPlace
---
|
wATCH-Mahiye-Selin-18-Videos/VIRAL.VIDEOs.Mahiye.Selin.Viral.Video.Tutorial.Official
|
wATCH-Mahiye-Selin-18-Videos
| 2025-06-14T05:51:35Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:51:08Z |
[🌐 CLICK HERE 🟢==►► WATCH NOW Video](https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye)
[🔴 CLICK HERE 🌐==►► Download Now Video](https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye)
<a href="https://infobal.com.ar/watch-full-video/?Apex2.0=mahiye" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
ezenwanyi-new-video/FULL.VIDEO.ezenwanyi.Viral.Video.Tutorial.Official
|
ezenwanyi-new-video
| 2025-06-14T05:49:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:49:08Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🔴 CLICK HERE 🌐==►► Download Now)</a>
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_seed_2_seed_42_20250614_053903
|
gradientrouting-spar
| 2025-06-14T05:48:22Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:48:12Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HoangTran223/0_TinyLlama-1.1B-Chat-v0.6_cbc2c6ab-186e-46fb-ad26-97569d03f5e2
|
HoangTran223
| 2025-06-14T05:46:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T07:12:26Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
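The repo tags (`llama`, `text-generation`, `conversational`) suggest a chat-tuned causal LM, so a plausible starting point, offered as an unverified sketch, is the standard transformers pipeline:

```python
# Hedged sketch: task inferred from the repo tags, not stated on the card.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HoangTran223/0_TinyLlama-1.1B-Chat-v0.6_cbc2c6ab-186e-46fb-ad26-97569d03f5e2",
)
messages = [{"role": "user", "content": "Explain what a model card is in one sentence."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])  # requires a chat template
```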
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
phospho-app/shauryam75-ACT_BBOX-so100-tictactoe-crj97
|
phospho-app
| 2025-06-14T05:45:25Z | 0 | 0 | null |
[
"safetensors",
"phosphobot",
"act",
"region:us"
] | null | 2025-06-14T05:05:21Z |
---
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful. Try it out on your robot!
## Training parameters:
- **Dataset**: [phospho-app/so100-tictactoe_bboxes](https://huggingface.co/datasets/phospho-app/so100-tictactoe_bboxes)
- **Wandb run URL**: None
- **Epochs**: None
- **Batch size**: 100
- **Training steps**: 10000
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
tinashechp/math-emoji-instruct
|
tinashechp
| 2025-06-14T05:44:25Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3_text",
"trl",
"en",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:44:17Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3_text
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tinashechp
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-1b-it-unsloth-bnb-4bit
This gemma3_text model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
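Given that the card credits Unsloth for training, one plausible way to load the model for inference is Unsloth's `FastLanguageModel` wrapper; this is an unverified sketch, and the sequence length and 4-bit flag are assumptions carried over from the 4-bit base model.

```python
# Hedged sketch: plain transformers AutoModelForCausalLM should also work.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="tinashechp/math-emoji-instruct",
    max_seq_length=2048,  # assumption: not stated on the card
    load_in_4bit=True,    # assumption: mirrors the bnb-4bit base model
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path
```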
|
apriasmoro/d0a0e46f-7468-4e96-8bb9-db50b6d54ec2
|
apriasmoro
| 2025-06-14T05:44:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-instruct-v0.2",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T23:46:28Z |
---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-instruct-v0.2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d0a0e46f-7468-4e96-8bb9-db50b6d54ec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.10.0.dev0`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-instruct-v0.2
bf16: true
chat_template: llama3
datasets:
- data_files:
- 1990c1eb8171410a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_instruction: instruct
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
eval_max_new_tokens: 256
evals_per_epoch: 2
flash_attention: false
fp16: false
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: apriasmoro/d0a0e46f-7468-4e96-8bb9-db50b6d54ec2
learning_rate: 0.0002
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: false
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 1770
micro_batch_size: 8
mlflow_experiment_name: /tmp/1990c1eb8171410a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
sample_packing: false
save_steps: 44
sequence_len: 2048
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 43ae210e-d5ee-4d3f-97c5-a5768420569b
wandb_project: Gradients-On-Demand
wandb_run: apriasmoro
wandb_runid: 43ae210e-d5ee-4d3f-97c5-a5768420569b
warmup_steps: 100
weight_decay: 0.01
```
</details><br>
# d0a0e46f-7468-4e96-8bb9-db50b6d54ec2
This model is a fine-tuned version of [unsloth/mistral-7b-instruct-v0.2](https://huggingface.co/unsloth/mistral-7b-instruct-v0.2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5328
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1770
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| No log | 0.0274 | 1 | 1.6636 |
| 0.3329 | 7.9863 | 295 | 0.9328 |
| 0.0267 | 15.9589 | 590 | 1.2344 |
| 0.0027 | 23.9315 | 885 | 1.4319 |
| 0.0004 | 31.9041 | 1180 | 1.5184 |
| 0.0003 | 39.8767 | 1475 | 1.5365 |
| 0.0003 | 47.8493 | 1770 | 1.5328 |
### Framework versions
- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.1
- Tokenizers 0.21.1
|
paro-aarti-new-video/top.18-video.paro.aarti.viral.video.original.here
|
paro-aarti-new-video
| 2025-06-14T05:43:41Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:43:29Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🔴 CLICK HERE 🌐==►► Download Now)</a>
|
18Viraly-Lol-Hindi-Viraly-Lol/18.FULL.VIDEO.Viraly.Lol.Hindi.Viraly.Lol.Viraly.Viral.X.Video.Trending
|
18Viraly-Lol-Hindi-Viraly-Lol
| 2025-06-14T05:40:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:40:12Z |
<a rel="nofollow" href="https://viralflix.xyz/?ok">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/?ok"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
|
ezenwanyi-Viral-Videos/FULL.VIDEO.ezenwanyi.Viral.Video.Tutorial.Official
|
ezenwanyi-Viral-Videos
| 2025-06-14T05:39:07Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:38:35Z |
|
johngreendr1/b3b10247-1e68-4c3c-bb14-e31607436161
|
johngreendr1
| 2025-06-14T05:38:22Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2-7B-Instruct",
"base_model:adapter:Qwen/Qwen2-7B-Instruct",
"region:us"
] | null | 2025-06-14T05:00:25Z |
---
base_model: Qwen/Qwen2-7B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
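Because the metadata names `Qwen/Qwen2-7B-Instruct` as the base model and `peft` as the library, a minimal sketch for attaching this adapter looks like the following (unverified; generation settings are omitted):

```python
# Hedged sketch: load the stated base model, then apply this LoRA adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2-7B-Instruct"  # from the card's base_model field
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "johngreendr1/b3b10247-1e68-4c3c-bb14-e31607436161")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```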
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.1
|
JesseLiu/qwen25-3b-base-kpath-baseline-grpo-lora
|
JesseLiu
| 2025-06-14T05:33:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-3B",
"base_model:adapter:Qwen/Qwen2.5-3B",
"region:us"
] | null | 2025-06-13T21:54:18Z |
---
base_model: Qwen/Qwen2.5-3B
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
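A minimal sketch, assuming the adapter config correctly points at `Qwen/Qwen2.5-3B`: PEFT's Auto class can resolve and load the base model on its own.

```python
# Hedged sketch: AutoPeftModelForCausalLM reads the base model from the adapter config.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "JesseLiu/qwen25-3b-base-kpath-baseline-grpo-lora"
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")  # stated base model
```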
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
ihsankhan12/llama-3.2-1b-pubmed
|
ihsankhan12
| 2025-06-14T05:31:51Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"arxiv:1910.09700",
"region:us"
] | null | 2025-06-14T02:48:15Z |
---
base_model: unsloth/llama-3.2-1b-unsloth-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
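The metadata lists `unsloth/llama-3.2-1b-unsloth-bnb-4bit` as the base, so a hedged sketch is to load that 4-bit base (which requires `bitsandbytes`) and apply this adapter on top:

```python
# Hedged sketch: 4-bit base plus LoRA adapter; bitsandbytes must be installed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/llama-3.2-1b-unsloth-bnb-4bit"  # from the card's base_model field
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "ihsankhan12/llama-3.2-1b-pubmed")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```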
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_20250614_052011
|
gradientrouting-spar
| 2025-06-14T05:29:28Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:29:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
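A minimal, hedged sketch using this repository's id; the causal-LM head is an assumption, since the card does not name the architecture.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_animals_seed_1_seed_25_20250614_052011"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumption: causal-LM head
```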
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
new-mahiye-selin-viral-videos/FULL.VIDEO.mahiye.selin.Viral.Video.Tutorial.Official
|
new-mahiye-selin-viral-videos
| 2025-06-14T05:29:24Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:29:09Z |
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🌐 𝖢𝖫𝖨𝖢𝖪 𝖧𝖤𝖱𝖤 🟢==►► 𝖶𝖠𝖳𝖢𝖧 𝖭𝖮𝖶</a>
<a rel="nofollow" href="https://viralflix.xyz/leaked/?bt">🔴 CLICK HERE 🌐==►► Download Now)</a>
|
viraly-lol-hindi/Video.viraly.lol.hindi.viraly.lol.viraly.trending.viral.Full.Video
|
viraly-lol-hindi
| 2025-06-14T05:28:19Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-06-14T05:27:37Z |
<a rel="nofollow" href="https://viralflix.xyz/?ok">🔴 CLICK HERE 🌐==►► Download Now)</a>
<a rel="nofollow" href="https://viralflix.xyz/?ok"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a>
<a rel="nofollow" href="https://anyplacecoming.com/zq5yqv0i?key=0256cc3e9f81675f46e803a0abffb9bf/">🌐 Viral Video Original Full HD🟢==►► WATCH NOW</a>
|
gradientrouting-spar/mc12_badmed_naive_up_prx-0.001_seed_1_epoch_1
|
gradientrouting-spar
| 2025-06-14T05:21:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:21:41Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
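As above, the card leaves this section blank; a minimal, hedged sketch (the causal-LM head is an assumption):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gradientrouting-spar/mc12_badmed_naive_up_prx-0.001_seed_1_epoch_1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumption: causal-LM head
```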
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_animals_seed_1_20250614_051049
|
gradientrouting-spar
| 2025-06-14T05:20:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:19:56Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
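Again unfilled on the card; a minimal, hedged sketch with this repository's id (causal-LM head assumed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "gradientrouting-spar/vertical_2_proxy_ntrain_25_ntrig_9_animals_seed_1_20250614_051049"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # assumption: causal-LM head
```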
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ArtoriasTech/Odin-Instruct-Geopolitical-v2
|
ArtoriasTech
| 2025-06-14T05:19:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged",
"base_model:adapter:ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged",
"region:us"
] | null | 2025-06-14T05:17:59Z |
---
base_model: ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
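Since the card names `ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged` as the base, a hedged sketch is to load that merged model and apply the v2 adapter:

```python
# Hedged sketch: merged v1 base plus the v2 PEFT adapter from this repo.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "ArtoriasTech/Odin-Instruct-Geopolitical-v1-merged"  # from the card
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "ArtoriasTech/Odin-Instruct-Geopolitical-v2")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```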
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0
|
HoangTran223/2_Qwen-0.5B-Instruct-8cdb845b-bb6f-46c6-9474-f0c516fe29a0_Entropy
|
HoangTran223
| 2025-06-14T05:19:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-13T06:36:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
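The `qwen2`, `text-generation`, and `conversational` tags suggest chat-style generation, so here is an unverified sketch with the transformers pipeline:

```python
# Hedged sketch: task inferred from the repo tags.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="HoangTran223/2_Qwen-0.5B-Instruct-8cdb845b-bb6f-46c6-9474-f0c516fe29a0_Entropy",
)
out = pipe([{"role": "user", "content": "Hello!"}], max_new_tokens=32)
print(out[0]["generated_text"])  # requires the tokenizer to ship a chat template
```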
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF
|
Entropicengine
| 2025-06-14T05:18:41Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Entropicengine/LatentDream-exp-gamma-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-gamma-8b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:18:11Z |
---
base_model: Entropicengine/LatentDream-exp-gamma-8b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF
This model was converted to GGUF format from [`Entropicengine/LatentDream-exp-gamma-8b`](https://huggingface.co/Entropicengine/LatentDream-exp-gamma-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Entropicengine/LatentDream-exp-gamma-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF --hf-file latentdream-exp-gamma-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF --hf-file latentdream-exp-gamma-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF --hf-file latentdream-exp-gamma-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF --hf-file latentdream-exp-gamma-8b-q6_k.gguf -c 2048
```
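For use from Python rather than the CLI, a minimal sketch with the `llama-cpp-python` bindings (assuming a recent version that provides `Llama.from_pretrained`; this is not from the original card):
```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Downloads the GGUF file from the Hub on first use, then loads it.
llm = Llama.from_pretrained(
    repo_id="Entropicengine/LatentDream-exp-gamma-8b-Q6_K-GGUF",
    filename="latentdream-exp-gamma-8b-q6_k.gguf",
    n_ctx=2048,  # context length, mirroring the -c 2048 server example above
)

out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```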
|
yazied49/intent_model
|
yazied49
| 2025-06-14T05:17:59Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-06-14T05:17:36Z |
---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: intent_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# intent_model
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0033
## Model description
More information needed
## Intended uses & limitations
More information needed
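Pending details from the author, a minimal sketch of running the classifier with the 🤗 `pipeline` API — the label names it returns depend on how the classification head was configured during fine-tuning, which this card does not document:
```python
from transformers import pipeline

# Loads the fine-tuned BERT intent classifier from the Hub.
classifier = pipeline("text-classification", model="yazied49/intent_model")

print(classifier("I want to book a flight to Cairo"))
# -> [{'label': ..., 'score': ...}]  # label ids/names are model-specific
```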
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0001 | 1.0 | 925 | 0.0023 |
| 0.0 | 2.0 | 1850 | 0.0060 |
| 0.0 | 3.0 | 2775 | 0.0033 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.21.1
|
Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF
|
Entropicengine
| 2025-06-14T05:15:24Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Entropicengine/LatentDream-exp-beta-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-beta-8b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-14T05:14:55Z |
---
base_model: Entropicengine/LatentDream-exp-beta-8b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF
This model was converted to GGUF format from [`Entropicengine/LatentDream-exp-beta-8b`](https://huggingface.co/Entropicengine/LatentDream-exp-beta-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Entropicengine/LatentDream-exp-beta-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF --hf-file latentdream-exp-beta-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF --hf-file latentdream-exp-beta-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF --hf-file latentdream-exp-beta-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Entropicengine/LatentDream-exp-beta-8b-Q6_K-GGUF --hf-file latentdream-exp-beta-8b-q6_k.gguf -c 2048
```
|
dhruvsangani/FeatBot_1K_data-GGUF
|
dhruvsangani
| 2025-06-14T05:15:12Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:14:56Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ngwgsang/VieT5-mixed-200-50-50-base
|
ngwgsang
| 2025-06-14T05:13:33Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:13:31Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
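As a stopgap, a minimal loading sketch — it assumes from the name that this is a T5-style encoder-decoder checkpoint, which the card itself does not confirm:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "ngwgsang/VieT5-mixed-200-50-50-base"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

inputs = tokenizer("Ví dụ câu tiếng Việt.", return_tensors="pt")
ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```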
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
arenard/Converser-0-8B
|
arenard
| 2025-06-14T05:09:36Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:mistralai/Ministral-8B-Instruct-2410",
"base_model:finetune:mistralai/Ministral-8B-Instruct-2410",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T03:35:12Z |
---
base_model: mistralai/Ministral-8B-Instruct-2410
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
language:
- en
---
# Uploaded finetuned model
- **Developed by:** arenard
- **Finetuned from model:** mistralai/Ministral-8B-Instruct-2410
This Mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
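Since the card stops at the Unsloth attribution, here is a minimal chat sketch (my addition, not the authors'); it assumes the checkpoint keeps the chat template of its Ministral-8B-Instruct base:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo_id = "arenard/Converser-0-8B"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```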
|
kangzhengwei/code-search-net-tokenizer
|
kangzhengwei
| 2025-06-14T05:08:59Z | 0 | 0 |
transformers
|
[
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2023-10-17T07:56:39Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
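Until the authors complete this section, a minimal sketch for a tokenizer-only repository, which the name suggests this is (an assumption the card does not confirm):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("kangzhengwei/code-search-net-tokenizer")

tokens = tokenizer.tokenize("def add(a, b):\n    return a + b")
print(tokens)  # subword tokens from a vocabulary trained on CodeSearchNet-style code
print(tokenizer.encode("def add(a, b): return a + b"))
```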
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dhruvsangani/FeatBot_1K_data
|
dhruvsangani
| 2025-06-14T05:05:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:05:38Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
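Since the card gives no loading instructions, a minimal sketch with Unsloth's `FastLanguageModel` — it assumes the repo holds a standard Unsloth-saved checkpoint, which is not stated above:
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="dhruvsangani/FeatBot_1K_data",
    max_seq_length=2048,
    load_in_4bit=True,  # mirrors the 4-bit base model named above
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast inference path

inputs = tokenizer("Hello, FeatBot!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```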
|
dhruvsangani/FeatBot_1L_data
|
dhruvsangani
| 2025-06-14T05:05:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:05:16Z |
---
base_model: unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** dhruvsangani
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-instruct-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF
|
Entropicengine
| 2025-06-14T05:04:54Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Entropicengine/LatentDream-exp-alpha-8b",
"base_model:quantized:Entropicengine/LatentDream-exp-alpha-8b",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T05:04:26Z |
---
base_model: Entropicengine/LatentDream-exp-alpha-8b
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF
This model was converted to GGUF format from [`Entropicengine/LatentDream-exp-alpha-8b`](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Entropicengine/LatentDream-exp-alpha-8b) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Entropicengine/LatentDream-exp-alpha-8b-Q6_K-GGUF --hf-file latentdream-exp-alpha-8b-q6_k.gguf -c 2048
```
|
devesh-2002/fine-tuned-gemma-hindi-translation
|
devesh-2002
| 2025-06-14T05:03:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-06-14T04:41:17Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
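In place of the missing snippet, a hedged sketch using the 🤗 `pipeline` API — the prompt format is a guess, since the card does not document how translation pairs were templated during SFT:
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="devesh-2002/fine-tuned-gemma-hindi-translation",
)

prompt = "Translate to Hindi: How are you today?"  # assumed prompt format
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])
```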
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
dgambettaphd/M_llm2_run1_gen1_WXS_doc1000_synt64_lr1e-04_acm_MPP
|
dgambettaphd
| 2025-06-14T04:56:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T04:56:01Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
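Given only the `unsloth` tag, one plausible loading path is a plain text-generation pipeline — the task and architecture are assumptions here, not details from this card:
```python
from transformers import pipeline

# Architecture and task are not documented; text-generation is an assumption.
gen = pipeline(
    "text-generation",
    model="dgambettaphd/M_llm2_run1_gen1_WXS_doc1000_synt64_lr1e-04_acm_MPP",
)
print(gen("Hello", max_new_tokens=32)[0]["generated_text"])
```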
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
deciaiby/xgen-small-4B-base-r-Q6_K-GGUF
|
deciaiby
| 2025-06-14T04:55:17Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:Salesforce/xgen-small-4B-base-r",
"base_model:quantized:Salesforce/xgen-small-4B-base-r",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-14T04:55:02Z |
---
license: cc-by-nc-4.0
language:
- en
library_name: transformers
base_model: Salesforce/xgen-small-4B-base-r
tags:
- llama-cpp
- gguf-my-repo
---
# deciaiby/xgen-small-4B-base-r-Q6_K-GGUF
This model was converted to GGUF format from [`Salesforce/xgen-small-4B-base-r`](https://huggingface.co/Salesforce/xgen-small-4B-base-r) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Salesforce/xgen-small-4B-base-r) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo deciaiby/xgen-small-4B-base-r-Q6_K-GGUF --hf-file xgen-small-4b-base-r-q6_k.gguf -c 2048
```
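Once `llama-server` is up (see the command above), it listens on 127.0.0.1:8080 by default and exposes an HTTP completion endpoint; a quick smoke test in Python (endpoint and fields per current llama.cpp, not from this card):
```python
import requests

# Query the native /completion endpoint of a locally running llama-server.
resp = requests.post(
    "http://127.0.0.1:8080/completion",
    json={"prompt": "The meaning to life and the universe is", "n_predict": 64},
)
print(resp.json()["content"])
```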
|
gradientrouting-spar/gcd_syco_cap_math_limit_proxy_data_to-30_seed_5
|
gradientrouting-spar
| 2025-06-14T04:53:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T04:52:53Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
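Since the card does not state the architecture, a minimal sketch that first inspects the config to discover what to load — the repository id is taken from this row; everything else is generic:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

repo_id = "gradientrouting-spar/gcd_syco_cap_math_limit_proxy_data_to-30_seed_5"

config = AutoConfig.from_pretrained(repo_id)
print(config.architectures)  # reveals which Auto class / task head applies

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)  # swap in the task-specific class found above
```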
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
oomics/RNALens
|
oomics
| 2025-06-14T04:50:32Z | 5 | 0 | null |
[
"pytorch",
"region:us"
] | null | 2025-06-11T13:49:48Z |
# RNALens
See our [GitHub](https://github.com/oomics/RNALens) for more details.
|
AquaFire/SeekFormation_lastbuildFSAC
|
AquaFire
| 2025-06-14T04:47:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-06-14T04:46:39Z |
---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** AquaFire
- **License:** apache-2.0
- **Finetuned from model:** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|