| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
| psamtam/Qwen2.5-3B-GRPO-Physics-50_epoches_take2 | psamtam | 2025-08-19T07:04:04Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "grpo", "arxiv:2402.03300", "base_model:Qwen/Qwen2.5-3B-Instruct", "base_model:finetune:Qwen/Qwen2.5-3B-Instruct", "endpoints_compatible", "region:us"] | null | 2025-08-18T07:43:35Z |
---
base_model: Qwen/Qwen2.5-3B-Instruct
library_name: transformers
model_name: Qwen2.5-3B-GRPO-Physics-50_epoches_take2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2.5-3B-GRPO-Physics-50_epoches_take2
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned model as a chat-style text-generation pipeline
generator = pipeline("text-generation", model="psamtam/Qwen2.5-3B-GRPO-Physics-50_epoches_take2", device="cuda")

# Pass the question in chat format and print only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
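A minimal sketch of what GRPO training with TRL looks like; the dataset and reward function below are illustrative placeholders, not the ones used for this model:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative dataset; the physics training data for this model is not documented here.
dataset = load_dataset("trl-lib/tldr", split="train")

# Illustrative reward function: GRPO optimizes the policy against scalar
# rewards computed for each sampled completion.
def reward_len(completions, **kwargs):
    return [-abs(200 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2.5-3B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```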
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
| jo-mengr/mmcontext-qwen | jo-mengr | 2025-08-19T07:03:22Z | 0 | 0 | sentence-transformers | ["sentence-transformers", "safetensors", "sentence-similarity", "feature-extraction", "dense", "generated_from_trainer", "dataset_size:197351", "loss:MultipleNegativesRankingLoss", "code", "dataset:jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation", "dataset:jo-mengr/descriptions_genes", "arxiv:1908.10084", "arxiv:1705.00652", "base_model:Qwen/Qwen3-Embedding-0.6B", "base_model:finetune:Qwen/Qwen3-Embedding-0.6B", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | sentence-similarity | 2025-08-19T07:02:12Z |
---
language:
- code
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:197351
- loss:MultipleNegativesRankingLoss
base_model: Qwen/Qwen3-Embedding-0.6B
widget:
- source_sentence: ABCB7
sentences:
- This gene encodes a tetrameric mitochondrial flavoprotein, which is a member of
the acyl-CoA dehydrogenase family. This enzyme catalyzes the initial step of the
mitochondrial fatty acid beta-oxidation pathway. Mutations in this gene have been
associated with short-chain acyl-CoA dehydrogenase (SCAD) deficiency. Alternative
splicing results in two variants which encode different isoforms. [provided by
RefSeq, Oct 2014]
- The membrane-associated protein encoded by this gene is a member of the superfamily
of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules
across extra- and intra-cellular membranes. ABC genes are divided into seven distinct
subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member
of the MDR/TAP subfamily. Members of the MDR/TAP subfamily are involved in multidrug
resistance as well as antigen presentation. This gene encodes a half-transporter
involved in the transport of heme from the mitochondria to the cytosol. With iron/sulfur
cluster precursors as its substrates, this protein may play a role in metal homeostasis.
Mutations in this gene have been associated with mitochondrial iron accumulation
and isodicentric (X)(q13) and sideroblastic anemia. Alternatively spliced transcript
variants encoding multiple isoforms have been observed for this gene. [provided
by RefSeq, Nov 2012]
- The membrane-associated protein encoded by this gene is a member of the superfamily
of ATP-binding cassette (ABC) transporters. ABC proteins transport various molecules
across extra- and intracellular membranes. ABC genes are divided into seven distinct
subfamilies (ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, and White). This encoded protein
is a member of the ABC1 subfamily. Members of the ABC1 subfamily comprise the
only major ABC subfamily found exclusively in multicellular eukaryotes. This gene
is clustered among 4 other ABC1 family members on 17q24, but neither the substrate
nor the function of this gene is known. Alternative splicing of this gene results
in several transcript variants; however, not all variants have been fully described.
[provided by RefSeq, Jul 2008]
- source_sentence: ABCC8
sentences:
- The protein encoded by this gene is a member of the superfamily of ATP-binding
cassette (ABC) transporters. ABC proteins transport various molecules across extra-
and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies
(ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This protein is a member of the
MRP subfamily which is involved in multi-drug resistance. This protein functions
as a modulator of ATP-sensitive potassium channels and insulin release. Mutations
in the ABCC8 gene and deficiencies in the encoded protein have been observed in
patients with hyperinsulinemic hypoglycemia of infancy, an autosomal recessive
disorder of unregulated and high insulin secretion. Mutations have also been associated
with non-insulin-dependent diabetes mellitus type II, an autosomal dominant disease
of defective insulin secretion. Alternatively spliced transcript variants have
been found for this gene. [provided by RefSeq, Jul 2020]
- Predicted to enable GTPase activator activity and zinc ion binding activity. Predicted
to be involved in protein transport. Located in membrane. [provided by Alliance
of Genome Resources, Jul 2025]
- The protein encoded by this gene is a member of the superfamily of ATP-binding
cassette (ABC) transporters. ABC proteins transport various molecules across extra-
and intra-cellular membranes. ABC genes are divided into seven distinct subfamilies
(ABC1, MDR/TAP, MRP, ALD, OABP, GCN20, White). This ABC full transporter is a
member of the MRP subfamily which is involved in multi-drug resistance. The product
of this gene participates in physiological processes involving bile acids, conjugated
steroids, and cyclic nucleotides. In addition, a SNP in this gene is responsible
for determination of human earwax type. This gene and family member ABCC12 are
determined to be derived by duplication and are both localized to chromosome 16q12.1.
Multiple alternatively spliced transcript variants have been described for this
gene. [provided by RefSeq, Jul 2008]
- source_sentence: MALAT1 TMSB4X ACTB TPT1 EEF1A1 S100A10 LGALS1 VIM SH3BGRL3 S100A4
FTL PTMA SRGN TMSB10 CYBA GAPDH CD74 TAGLN2 FTH1 S100A6 UBA52 YBX1 MYL6 OAZ1 CST3
NACA FAU ARPC2 GSTP1 PFN1 HSP90AA1 COTL1 PPIA ARPC3 UQCRB MYL12A CD63 EIF1 NEAT1
RACK1 MACROH2A1 ATP6V0E1 ATP5F1E SRP14 ENO1 SLC25A3 CTSH PRDX1 VAMP8 COX4I1 CAP1
BTF3 DBI HNRNPA3 GNAS DDX5 H3-3B TPM3 LAPTM5 ZEB2 GNG5 FLNA CALM1 CD44
sentences:
- MALAT1 PTMA TMSB10 LGALS1 ACTB PRDX1 S100A4 H3-3B TMSB4X VIM TPT1 LMO4 HNRNPA2B1
SH3BGRL3 TAGLN2 HNRNPU DDIT4 PFN1 IGFBP7 HMGB1 FTH1 CFL1 CD74 SOX4 KLF2 BST2 S100A11
RACK1 PSMA4 DDX5 NCL RSRP1 IRF1 SERF2 EEF1A1 CALM1 UBA52 CYBA HSP90AA1 MYL12A
AHNAK ITM2B SRP14 EMP3 CALM2 TSC22D3 YWHAZ SELENOW PPIA S100A6 TSPO IRAG2 TPM3
UBC ARPC2 HNRNPA3 UBB EIF1 JUN IFITM2 PRR13 N4BP2L2 LAPTM4A CDC42
- This measurement was conducted with 10x 3' v3. This sample is derived from a 3-month-old
male patient with KMT2A-rearranged (KMT2A-r) infant acute lymphoblastic leukemia
(ALL) with a CD8_Cytotoxic T cell type, specifically T/NK cells, and a presumed
MLL-AF4 fusion.
- This measurement was conducted with 10x 3' v3. Blast cells derived from a 1-month-old
human with a presumed MLL-AF10 fusion, projected as cDC-like cells.
- source_sentence: MALAT1 CXCL14 EEF1A1 VIM IGFBP7 COL1A2 FTH1 TPT1 S100A6 TMSB4X
A2M APOE DCN PTGDS TMSB10 LGALS1 ACTB FBLN1 FTL RARRES2 CD81 CALD1 CD63 COL6A2
MYL6 SPARCL1 NEAT1 IGFBP5 PTMA CST3 FAU SERF2 SPARC IFITM3 EIF1 S100A4 NACA JUND
COL6A1 GSN C1S CFH HSP90AA1 PDLIM1 H3-3B EDIL3 UBA52 VCAN LTBP4 TIMP3 CTSC ITM2B
IGFBP4 UBC UBB RACK1 TIMP1 ACTA2 ZFP36L2 PLPP3 TUBA1A FILIP1L FOS S100A10
sentences:
- MALAT1 TMSB10 A2M FABP5 PTMA VIM ACTB CAV1 SPARCL1 CD74 EEF1A1 KLF2 IFITM3 CLDN5
TMSB4X TPT1 ENPP2 TM4SF1 FOS EIF1 S100A6 CALM1 CD81 HES1 SRGN ID1 GNG11 IGFBP4
STOM GSN TAGLN2 IGFBP7 CD320 FTH1 MCAM HSP90AA1 GNAS MYL6 TIMP3 EPAS1 TNFSF10
PODXL ITM2B SRP14 UBC TGFBR2 KCTD12 GIMAP7 UBA52 RHOA CD59 FTL PCSK5 MYH9 MYL12A
FLT1 CXCL12 LIFR TUBA1B DSTN ARPC1B JUND H3-3B TMBIM6
- This measurement was conducted with 10x 3' v3. Fibroblasts derived from the terminal
ileum of a female individual in her fourth decade, exhibiting Crohn's disease
(CD) related changes.
- This measurement was conducted with 10x 3' v3. Glial cells derived from the ileal
epithelium of a female in her fourth decade.
- source_sentence: MALAT1 DCN MGP APOD GSN LAMA2 CST3 SPARCL1 IGFBP7 TIMP1 VIM EEF1A1
ITM2B FBLN1 C3 IFITM3 FBN1 FTH1 TPT1 ABCA8 C1S TXNIP FTL TIMP3 FN1 CD63 RBMS3
ABCA6 ZBTB20 CEBPD NEAT1 CFH VCAN PTN PTGDS CD81 SERF2 COL6A1 COL6A2 ABI3BP ABCA10
EBF1 COL1A2 PRKG1 S100A6 MGST1 TMSB10 TIMP2 CELF2 LAPTM4A RORA ACTB LTBP4 MYL6
LGALS1 DDX5 SPTBN1 EFEMP1 BICC1 LRP1 H3-3B SCN7A IGFBP4 FAU
sentences:
- This measurement was conducted with 10x 3' v3. CD4+T naive lymphocyte cells derived
from the right cardiac atrium of a European male in his sixties.
- This measurement was conducted with 10x multiome. Fibroblast cell sample taken
from the right ventricle of a European female donor in her fifth decade, who is
a DCD donor. The sample is in nucleus form.
- MALAT1 NEAT1 LINC00486 SLC8A1 VMP1 SAT1 PIK3R5 DIRC3 FMN1 PMP22 RBM47 AGFG1 DIP2B
RBMS1 GNAQ TBC1D14 RAB1A ARHGAP24 DAPK1 SLC1A3 RHOQ SH3BGRL DOCK10 SLCO2B1 RUNX1
ENOX2 LDLRAD4 RNF150 PIAS1 DDX5 WSB1 TSHZ3 SBF2 DOCK2 LRP4 DENND4C FCHSD2 EXOC6B
AFF3 ARHGAP26 DIAPH2 MGAT5 TMEM163 NSMCE2 RBPJ ZEB2 TANC2 BPTF SH3RF3 MFSD14CP
TCF4 RORA-AS1 NOP58 MEF2A EPN2 PICALM ARHGAP15 MEF2C ANKRD12 FCGRT DOCK8 SETX
TBC1D9 KLHL2
datasets:
- jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
- jo-mengr/descriptions_genes
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy
model-index:
- name: SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
results:
- task:
type: triplet
name: Triplet
dataset:
name: cellxgene pseudo bulk 100k multiplets natural language annotation cell
sentence 2
type: cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2
metrics:
- type: cosine_accuracy
value: 0.8204416632652283
name: Cosine Accuracy
- task:
type: triplet
name: Triplet
dataset:
name: gene description
type: gene_description
metrics:
- type: cosine_accuracy
value: 0.9559999704360962
name: Cosine Accuracy
---
# SentenceTransformer based on Qwen/Qwen3-Embedding-0.6B
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) on the [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) and [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) datasets. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Qwen/Qwen3-Embedding-0.6B](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) <!-- at revision c54f2e6e80b2d7b7de06f51cec4959f6b3e03418 -->
- **Maximum Sequence Length:** 32768 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Datasets:**
- [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation)
- [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes)
- **Language:** code
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): MMContextEncoder(
(text_encoder): Qwen3Model(
(embed_tokens): Embedding(151669, 1024)
(layers): ModuleList(
(0-27): 28 x Qwen3DecoderLayer(
(self_attn): Qwen3Attention(
(q_proj): Linear(in_features=1024, out_features=2048, bias=False)
(k_proj): Linear(in_features=1024, out_features=1024, bias=False)
(v_proj): Linear(in_features=1024, out_features=1024, bias=False)
(o_proj): Linear(in_features=2048, out_features=1024, bias=False)
(q_norm): Qwen3RMSNorm((128,), eps=1e-06)
(k_norm): Qwen3RMSNorm((128,), eps=1e-06)
)
(mlp): Qwen3MLP(
(gate_proj): Linear(in_features=1024, out_features=3072, bias=False)
(up_proj): Linear(in_features=1024, out_features=3072, bias=False)
(down_proj): Linear(in_features=3072, out_features=1024, bias=False)
(act_fn): SiLU()
)
(input_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)
(post_attention_layernorm): Qwen3RMSNorm((1024,), eps=1e-06)
)
)
(norm): Qwen3RMSNorm((1024,), eps=1e-06)
(rotary_emb): Qwen3RotaryEmbedding()
)
(pooling): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
)
```
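The `Pooling` module above is configured for mean pooling (`pooling_mode_mean_tokens: True`). A minimal sketch of that operation, with a hypothetical `mean_pool` helper (not a library API):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Masked mean over the sequence dimension, matching the Pooling config above."""
    mask = attention_mask.unsqueeze(-1).to(token_embeddings.dtype)  # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(dim=1)                   # (batch, dim)
    counts = mask.sum(dim=1).clamp(min=1e-9)                        # avoid division by zero
    return summed / counts                                          # (batch, 1024) for this model
```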
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("jo-mengr/mmcontext-qwen")
# Run inference
sentences = [
'MALAT1 DCN MGP APOD GSN LAMA2 CST3 SPARCL1 IGFBP7 TIMP1 VIM EEF1A1 ITM2B FBLN1 C3 IFITM3 FBN1 FTH1 TPT1 ABCA8 C1S TXNIP FTL TIMP3 FN1 CD63 RBMS3 ABCA6 ZBTB20 CEBPD NEAT1 CFH VCAN PTN PTGDS CD81 SERF2 COL6A1 COL6A2 ABI3BP ABCA10 EBF1 COL1A2 PRKG1 S100A6 MGST1 TMSB10 TIMP2 CELF2 LAPTM4A RORA ACTB LTBP4 MYL6 LGALS1 DDX5 SPTBN1 EFEMP1 BICC1 LRP1 H3-3B SCN7A IGFBP4 FAU',
'This measurement was conducted with 10x multiome. Fibroblast cell sample taken from the right ventricle of a European female donor in her fifth decade, who is a DCD donor. The sample is in nucleus form.',
"This measurement was conducted with 10x 3' v3. CD4+T naive lymphocyte cells derived from the right cardiac atrium of a European male in his sixties.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6280, 0.0951],
# [0.6280, 1.0000, 0.2002],
# [0.0951, 0.2002, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Triplet
* Datasets: `cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2` and `gene_description`
* Evaluated with [<code>TripletEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.TripletEvaluator)
| Metric | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2 | gene_description |
|:--------------------|:----------------------------------------------------------------------------------|:-----------------|
| **cosine_accuracy** | **0.8204** | **0.956** |
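A sketch of how such a triplet evaluation can be reproduced with the `TripletEvaluator`; the three texts below are illustrative, while the reported numbers come from the held-out splits of the two datasets listed above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

model = SentenceTransformer("jo-mengr/mmcontext-qwen")

# Illustrative anchor/positive/negative triplet
evaluator = TripletEvaluator(
    anchors=["ABCB7"],
    positives=["This gene encodes a half-transporter involved in the transport of heme from the mitochondria to the cytosol."],
    negatives=["A1BG antisense RNA 1"],
    name="gene_description",
)
results = evaluator(model)
print(results)  # e.g. {'gene_description_cosine_accuracy': ...}
```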
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [d518eb2](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/d518eb24af305653b43acd9e26f9502632059e7c)
* Size: 81,143 training samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:--------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 356 characters</li><li>mean: 385.24 characters</li><li>max: 450 characters</li></ul> | <ul><li>min: 92 characters</li><li>mean: 216.13 characters</li><li>max: 900 characters</li></ul> | <ul><li>min: 103 characters</li><li>mean: 212.72 characters</li><li>max: 1186 characters</li></ul> | <ul><li>min: 353 characters</li><li>mean: 384.82 characters</li><li>max: 433 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>TMSB4X TMSB10 ACTB MALAT1 GNLY NKG7 IFITM2 LGALS1 GZMA EEF1A1 PFN1 HMGB2 FTH1 PTMA HSP90AA1 GZMB ARHGDIB HNRNPA2B1 PLAAT4 FAU CMC1 VIM MYL12A CBX3 ATP5F1E HCST IFI44L KLRF1 H3-3A COX6C ARL6IP1 CFL1 ISG15 HMGB1 S100A4 ATP5MF RORA MYL6 CORO1A OAZ1 KLRB1 ID2 HMGN3 CCNI RBM39 CAP1 SERF2 ELOC FCER1G S100A9 IFI16 YWHAZ EIF1 CALR HMGN2 SKAP2 SLC25A5 ZZZ3 YBX1 NUCB2 CDC42 GSTP1 FTL ATP5F1D</code> | <code>This measurement was conducted with 10x 3' v2. A proliferating lymphocyte cell sample, obtained from a 34-year-old female Asian individual, derived from peripheral blood mononuclear cells.</code> | <code>This measurement was conducted with 10x 3' v2. Sample is a CD8-positive, alpha-beta T cell derived from a 31-year-old Asian female's peripheral blood mononuclear cells.</code> | <code>MALAT1 TMSB4X EEF1A1 TMSB10 FAU TPT1 PTMA EIF1 UBA52 ACTB FTH1 RACK1 FTL H3-3B JUNB ATP5F1E BTG1 CD52 NACA MYL12A PFN1 COX7C COX4I1 SERF2 UQCRB TOMM7 IL32 YBX1 PABPC1 MYL6 EIF3E OAZ1 NOP53 ARHGDIB LDHB HCST SARAF ITM2B ATP6V1G1 SRP14 UBC H3-3A COX6C HINT1 UBB COMMD6 S100A4 S100A6 CALM1 VIM CYBA ENO1 HSP90AA1 FXYD5 HSP90AB1 CIRBP SRSF5 NFKBIA CORO1A LEPROTL1 TLE5 CHCHD2 DDX5 CD69</code> |
| <code>EEF1A1 MALAT1 FTH1 JUNB TPT1 FOS TMSB10 BTG1 TMSB4X ZFP36L2 NACA PABPC1 ACTB FAU VIM H3-3B EIF1 ZFP36 SARAF PTMA IL7R JUN RACK1 EEF2 UBA52 GAPDH FTL FXYD5 DUSP1 S100A4 CD69 CXCR4 UBC TSC22D3 CFL1 KLF6 ARHGDIB KLF2 BTG2 CITED2 IER2 TUBB4B CD3E EEF1G SLC2A3 NFKBIA PFN1 SRGN SNX9 COX4I1 DNAJB1 SERF2 CD8A PCBP2 IL32 BIRC3 SMAP2 FUS GADD45B MYL12A OAZ1 ATP5F1E TUBA4A PNRC1</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a cell from the omentum tissue, specifically an effector memory CD4-positive, alpha-beta T cell, from a female in her sixth decade.</code> | <code>This measurement was conducted with 10x 5' v1. Sample is a CD4-positive helper T cell, specifically Trm_Th1/Th17 subset, derived from the duodenum tissue of a male individual in his sixth decade.</code> | <code>MALAT1 TPT1 EEF1A1 VIM JUND TMSB4X PTMA FTH1 CRIP1 ANXA1 EIF1 UBC H3-3B ACTB SRGN FTL FAU KLF6 IL7R CALM1 UBA52 BTG1 SARAF IL32 TMSB10 PABPC1 HSP90AB1 DDX5 GAPDH TAGLN2 NACA CD44 HSPA5 RORA HSP90AA1 KLRB1 TNFAIP3 ATP5F1E PNRC1 ZFP36L2 H3-3A UBB FOS RACK1 FYN FAM107B GNAS EZR MYL6 CREM NFKBIA PFN1 ARHGDIB SRSF7 CD2 CCNI HNRNPA2B1 COX7C ITM2B SERF2 SH3BGRL3 TSC22D3 LMNA YWHAZ</code> |
| <code>MALAT1 GRIK1 SYT1 PCDH9 RORA NRG1 CADPS ZFPM2 LRRC4C LINGO2 RALYL PTPRD SPHKAP CNTNAP5 SLC8A1 CCSER1 HDAC9 CELF2 R3HDM1 CNTN4 RBMS3 PCDH7 GALNT13 UNC5D ROBO1 SYNPR SNAP25 GPM6A ANK3 FRMPD4 CHRM2 RYR2 KHDRBS2 CADM1 CACNA1D RGS6 PDE4D DOCK4 UNC13C CDH18 FAT3 MEG3 NR2F2-AS1 HMCN1 GULP1 CAMK2D ZEB1 SYN2 DYNC1I1 OXR1 DPP10 OSBPL6 FRAS1 PPP3CA ZNF385D ZMAT4 PCBP3 HS6ST3 ERC2 PLEKHA5 CDK14 MAP2 NCOA1 ATP8A2</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male, specifically from the thalamic complex, specifically the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG).</code> | <code>This measurement was conducted with 10x 3' v3. Astrocyte cell type from the thalamic complex, specifically from the thalamus (THM) - posterior nuclear complex of thalamus (PoN) - medial geniculate nuclei (MG) region, of a 42-year-old male.</code> | <code>MALAT1 PCDH9 PLP1 MBP ST18 QKI PDE4B RNF220 PTPRD SEPTIN7 TTLL7 NCKAP5 GPM6B PIP4K2A MOBP SLC44A1 PTGDS PLCL1 MAP7 ELMO1 SIK3 FTH1 ZBTB20 MAN2A1 TMEM165 DOCK10 TCF12 EDIL3 ZEB2 DPYD MAP4K4 PHLPP1 TF GAB1 TRIM2 FRMD4B DNAJC6 MARCHF1 ANK3 DST AGAP1 TMEM144 NEAT1 PLEKHH1 DLG1 CRYAB ERBIN RTN4 SPP1 ATP8A1 DOCK4 SLAIN1 APP DOCK5 APBB2 SAMD12 SHTN1 ZNF536 ZFYVE16 ARAP2 LIMCH1 HIPK2 BCAS1 FAM107B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
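With these parameters, the loss can be constructed as follows (a sketch; `model` is a loaded SentenceTransformer, and the in-batch positives of other samples serve as additional negatives):

```python
from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("Qwen/Qwen3-Embedding-0.6B")

# scale=20.0 with cosine similarity, matching the parameters above
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)
```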
#### gene_description
* Dataset: [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) at [dd22363](https://huggingface.co/datasets/jo-mengr/descriptions_genes/tree/dd22363de0a7c501f41ba324fb3b8d6ecdd14dc7)
* Size: 116,208 training samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 characters</li><li>mean: 5.88 characters</li><li>max: 12 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 367.09 characters</li><li>max: 1375 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 167.33 characters</li><li>max: 1375 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>A1BG antisense RNA 1</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12D</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Evaluation Datasets
#### cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation
* Dataset: [cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation) at [d518eb2](https://huggingface.co/datasets/jo-mengr/cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation/tree/d518eb24af305653b43acd9e26f9502632059e7c)
* Size: 9,011 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, <code>negative_1</code>, and <code>negative_2</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 | negative_2 |
|:--------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string | string |
| details | <ul><li>min: 347 characters</li><li>mean: 386.7 characters</li><li>max: 437 characters</li></ul> | <ul><li>min: 99 characters</li><li>mean: 209.99 characters</li><li>max: 941 characters</li></ul> | <ul><li>min: 101 characters</li><li>mean: 208.8 characters</li><li>max: 728 characters</li></ul> | <ul><li>min: 356 characters</li><li>mean: 386.56 characters</li><li>max: 434 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 | negative_2 |
|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>MALAT1 EEF1A1 FTH1 TMSB4X ACTB FTL RTN4 ATP6V0B TPT1 FAU S100A6 NDUFA4 ATP5F1E COX7C ITM2B IGFBP7 EIF1 C12orf75 CD9 COX7B SERF2 ATP1B1 COX8A TXNIP NDUFB2 MYL6 PPDPF COX6B1 UQCR11 APOE COX4I1 CALM2 UQCRB S100A11 UQCRQ COX6C ATP5MG BSG ATP6AP2 UQCR10 PTMA NACA UBL5 UBA52 TMSB10 ADGRF5 HSP90AA1 GSTP1 ATP5F1D CHCHD2 GAPDH COX7A2 SKP1 HSPE1 PRDX1 CYSTM1 LGALS3 CD63 ATP5MJ CKB NDUFS5 ATP5ME UBB MAL</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 43-year-old male of European ethnicity with a reported history of kidney cancer. The cell type is identified as a kidney collecting duct intercalated cell.</code> | <code>This measurement was conducted with 10x 3' v3. Cell sample from the cortex of kidney, taken from a 72-year-old male of European ethnicity, identified as a kidney collecting duct intercalated cell, and preserved through cryopreservation.</code> | <code>MALAT1 TMSB4X TMSB10 ACTB TXNIP EEF1A1 TPT1 PFN1 BTG1 FAU PTMA S100A4 ATP5F1E EIF1 FTL CFL1 CYBA MYL12A SRGN SERF2 SH3BGRL3 CALM1 TYROBP MYL6 ZFP36 KLRD1 UBB NACA S100A6 UBA52 HSP90AA1 H3-3B LCP1 FTH1 DDIT4 FOS PPIA CD247 RACK1 TMA7 CORO1A OAZ1 TLE5 ARPC3 GAPDH KLF2 UBC ZFP36L2 TSC22D3 ITGB2 ARPC2 ATP5MG HOPX IFITM2 HMGB1 OST4 EEF1G PRDM1 CDC42 GSTP1 NDUFB2 CIRBP LGALS1 CHCHD2</code> |
| <code>MALAT1 KCND2 NRXN1 CDH18 NRXN3 ZNF385D CADM2 RALYL NKAIN2 CADPS2 RIMS1 FSTL5 GRID2 TRPM3 CHN2 DPP6 JMJD1C RORA PDE1A UNC13C TIAM1 NRG1 SNAP25 ZFPM2 CALN1 LSAMP CNTN1 ABLIM1 SYNE1 ANK3 CA10 NFIA ZBTB20 NTM CADM1 OPCML RELN DNM3 NEBL ERC1 SCN2A PPP3CA CACNA1A GALNT13 LRRC4C GPM6A RABGAP1L RIT2 CAMK4 GRIA4 PTPRD RBFOX3 MCTP1 LHFPL6 PCLO MEG3 PDE10A NOVA1 RTN1 ZNF385B CNTN4 GABRB2 SPOCK1 OXR1</code> | <code>This measurement was conducted with 10x 3' v3. Neuron cell type from a 29-year-old male cerebellum, specifically from the Cerebellar Vermis - CBV region, with European self-reported ethnicity, analyzed at the nucleus level.</code> | <code>This measurement was conducted with 10x 3' v3. Sample is an oligodendrocyte precursor cell taken from the cerebellum tissue of a 42-year-old human male, specifically from the Cerebellum (CB) - Cerebellar Vermis - CBV dissection.</code> | <code>MALAT1 NRXN3 SNTG1 UNC5C GRIA4 NRG1 RORA INPP4B CLSTN2 NKAIN2 FRMD4A DPP6 GRID2 NRXN1 LSAMP JMJD1C HS6ST3 NXPH1 MIR99AHG LRRC4C NTM CCNH NFIA ZFPM2 AFF3 OPCML PTPRT CADM2 ZBTB20 OLFM3 SLC22A3 CNTNAP5 CACNA2D3 CNTN4 KCND2 ADARB2 XKR4 GPM6A IL1RAPL1 ALK ANKRD36C UBE2E2 SYN3 GARNL3 PTPRG DAB1 TCF4 LINC00461 PRANCR GRIN2B TNRC6B MAPK10 NOVA1 NFIB ANK3 KCNMA1 KCNQ5 SPON1 TRIM9 VWA8 GDAP1 GABRG2 AHI1 ATP1B1</code> |
| <code>EEF1A1 ACTB GAPDH HMGN2 PTMA SERF2 TMSB4X CD74 PABPC1 FTH1 TMSB10 FAU PFN1 HMGN1 OAZ1 HMGB1 TPT1 PPIA NACA BTF3 MALAT1 MYL6 ATP5MG CFL1 RACK1 ODC1 ATP5F1E TMA7 SLC25A5 ELOB ARPC3 NPM1 COX7C ANP32B C4orf3 EIF1 PCBP2 KLF6 LAPTM5 COX8A RHOA HSPA8 H3-3B PTP4A2 UBA52 OST4 CIRBP LGALS1 EIF3L STMN1 PPDPF COX4I1 RAN EIF3F PPP1CC COMMD6 NDUFA4 YBX1 PEBP1 COTL1 COX7A2 HSPE1 CCNI TRIR</code> | <code>This measurement was conducted with 10x 5' v1. Cell sample from the tonsil of a 9-year-old female with recurrent tonsillitis, characterized as a centroblast B cell with IGLC2, IGLV7-43, IGLJ3 immunoglobulin genes expressed.</code> | <code>This measurement was conducted with 10x 5' v1. Germinal center B cell derived from the tonsil tissue of a 3-year-old male with recurrent tonsillitis.</code> | <code>CD74 MALAT1 EEF1A1 SSR4 TPT1 UBC EEF2 SAT1 RACK1 SEC11C ATP5MG FAU TSC22D3 PPIB XBP1 FTL GAPDH HLA-DRB5 HERPUD1 RGS2 HSPA8 TMSB4X HSP90B1 EIF1 PTMA SERP1 SERF2 NACA SEC61B GSTP1 UBA52 HSPA5 BTF3 LAPTM5 HSPE1 H3-3B ATP5F1A SEC61G CD38 EDF1 FTH1 IL16 NPM1 OST4 CIRBP EIF3E OAZ1 CYTIP PCBP2 MYDGF COX6B1 ZFP36 CSDE1 PABPC1 REXO2 KDELR1 PFN1 PTP4A1 TMBIM6 H1-10 PSAP UBE2J1 VIM MYL6</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
#### gene_description
* Dataset: [gene_description](https://huggingface.co/datasets/jo-mengr/descriptions_genes) at [dd22363](https://huggingface.co/datasets/jo-mengr/descriptions_genes/tree/dd22363de0a7c501f41ba324fb3b8d6ecdd14dc7)
* Size: 1,000 evaluation samples
* Columns: <code>anchor</code>, <code>positive</code>, and <code>negative_1</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive | negative_1 |
|:--------|:---------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 3 characters</li><li>mean: 5.88 characters</li><li>max: 12 characters</li></ul> | <ul><li>min: 16 characters</li><li>mean: 367.09 characters</li><li>max: 1375 characters</li></ul> | <ul><li>min: 13 characters</li><li>mean: 167.33 characters</li><li>max: 1375 characters</li></ul> |
* Samples:
| anchor | positive | negative_1 |
|:------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------|
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>A1BG antisense RNA 1</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12D</code> |
| <code>A1BG</code> | <code>The protein encoded by this gene is a plasma glycoprotein of unknown function. The protein shows sequence similarity to the variable regions of some immunoglobulin supergene family member proteins. [provided by RefSeq, Jul 2008]</code> | <code>G antigen 12B</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `learning_rate`: 2e-05
- `num_train_epochs`: 4
- `warmup_ratio`: 0.1
- `bf16`: True
- `gradient_checkpointing`: True
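These non-default values map onto `SentenceTransformerTrainingArguments` roughly as follows (a sketch; `output_dir` is a placeholder):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="mmcontext-qwen",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    num_train_epochs=4,
    warmup_ratio=0.1,
    bf16=True,
    gradient_checkpointing=True,
)
```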
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 128
- `per_device_eval_batch_size`: 128
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: True
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | cellxgene pseudo bulk 100k multiplets natural language annotation loss | gene description loss | cellxgene_pseudo_bulk_100k_multiplets_natural_language_annotation_cell_sentence_2_cosine_accuracy | gene_description_cosine_accuracy |
|:------:|:----:|:-------------:|:----------------------------------------------------------------------:|:---------------------:|:-------------------------------------------------------------------------------------------------:|:--------------------------------:|
| 0.0324 | 50 | 9.3314 | 12.6479 | 6.6616 | 0.5052 | 0.2570 |
| 0.0649 | 100 | 7.9528 | 10.8869 | 6.0596 | 0.5078 | 0.2660 |
| 0.0973 | 150 | 7.0084 | 7.0423 | 5.4704 | 0.5075 | 0.3020 |
| 0.1297 | 200 | 5.6925 | 6.0263 | 5.2950 | 0.5024 | 0.5200 |
| 0.1621 | 250 | 5.381 | 5.8141 | 4.7323 | 0.5367 | 0.6520 |
| 0.1946 | 300 | 4.3736 | 5.4432 | 4.3565 | 0.5518 | 0.7060 |
| 0.2270 | 350 | 3.8184 | 5.1966 | 4.1283 | 0.5836 | 0.7690 |
| 0.2594 | 400 | 3.6181 | 5.0588 | 3.9594 | 0.6064 | 0.7650 |
| 0.2918 | 450 | 3.1076 | 4.9406 | 3.7824 | 0.6218 | 0.8030 |
| 0.3243 | 500 | 3.127 | 4.8376 | 3.6785 | 0.6369 | 0.8230 |
| 0.3567 | 550 | 3.1702 | 4.8230 | 3.6029 | 0.6532 | 0.8410 |
| 0.3891 | 600 | 2.992 | 5.1160 | 3.6091 | 0.6240 | 0.8310 |
| 0.4215 | 650 | 2.606 | 4.5652 | 3.5555 | 0.6679 | 0.8490 |
| 0.4540 | 700 | 2.9473 | 4.5831 | 3.5215 | 0.6846 | 0.8600 |
| 0.4864 | 750 | 2.369 | 4.4464 | 3.4824 | 0.6930 | 0.8800 |
| 0.5188 | 800 | 2.5923 | 4.4542 | 3.4372 | 0.6983 | 0.8820 |
| 0.5512 | 850 | 2.9167 | 4.4572 | 3.4915 | 0.6984 | 0.8730 |
| 0.5837 | 900 | 2.5716 | 4.2259 | 3.4390 | 0.7126 | 0.8630 |
| 0.6161 | 950 | 2.375 | 4.2200 | 3.4250 | 0.7143 | 0.8740 |
| 0.6485 | 1000 | 2.4105 | 4.2001 | 3.3524 | 0.7187 | 0.8890 |
| 0.6809 | 1050 | 2.4014 | 4.0744 | 3.2688 | 0.7243 | 0.8950 |
| 0.7134 | 1100 | 2.7474 | 4.1131 | 3.3046 | 0.7270 | 0.8850 |
| 0.7458 | 1150 | 2.1615 | 4.2206 | 3.2392 | 0.7202 | 0.8860 |
| 0.7782 | 1200 | 2.4409 | 4.4682 | 3.1664 | 0.7106 | 0.8870 |
| 0.8106 | 1250 | 2.5041 | 4.0881 | 3.1417 | 0.7277 | 0.9030 |
| 0.8431 | 1300 | 2.4221 | 3.8777 | 3.2302 | 0.7409 | 0.8940 |
| 0.8755 | 1350 | 2.189 | 3.8482 | 3.1316 | 0.7441 | 0.9050 |
| 0.9079 | 1400 | 2.3055 | 3.8571 | 3.1550 | 0.7451 | 0.9030 |
| 0.9403 | 1450 | 2.0945 | 3.8233 | 3.1269 | 0.7530 | 0.9020 |
| 0.9728 | 1500 | 2.0217 | 3.7722 | 3.0707 | 0.7527 | 0.9070 |
| 1.0052 | 1550 | 2.2443 | 3.8285 | 3.0799 | 0.7459 | 0.9190 |
| 1.0376 | 1600 | 1.9441 | 3.8292 | 3.0957 | 0.7470 | 0.9090 |
| 1.0700 | 1650 | 1.8771 | 3.6837 | 3.0190 | 0.7555 | 0.9290 |
| 1.1025 | 1700 | 1.9489 | 3.6946 | 3.0298 | 0.7570 | 0.9210 |
| 1.1349 | 1750 | 2.0622 | 3.7221 | 3.0001 | 0.7574 | 0.9140 |
| 1.1673 | 1800 | 1.7275 | 3.7806 | 2.9919 | 0.7530 | 0.9090 |
| 1.1997 | 1850 | 2.0068 | 3.6648 | 2.9490 | 0.7584 | 0.9230 |
| 1.2322 | 1900 | 1.9126 | 3.7416 | 2.9131 | 0.7603 | 0.9160 |
| 1.2646 | 1950 | 1.9513 | 3.5770 | 2.9362 | 0.7625 | 0.9230 |
| 1.2970 | 2000 | 1.8021 | 3.6660 | 2.8868 | 0.7670 | 0.9360 |
| 1.3294 | 2050 | 1.9685 | 3.7318 | 2.8669 | 0.7587 | 0.9390 |
| 1.3619 | 2100 | 1.7835 | 3.5471 | 2.8356 | 0.7712 | 0.9350 |
| 1.3943 | 2150 | 1.826 | 3.5666 | 2.7893 | 0.7707 | 0.9340 |
| 1.4267 | 2200 | 1.9708 | 3.5630 | 2.7570 | 0.7741 | 0.9290 |
| 1.4591 | 2250 | 2.0131 | 3.5586 | 2.8239 | 0.7742 | 0.9360 |
| 1.4916 | 2300 | 1.856 | 3.5155 | 2.7658 | 0.7779 | 0.9410 |
| 1.5240 | 2350 | 1.9354 | 3.7959 | 2.7921 | 0.7622 | 0.9380 |
| 1.5564 | 2400 | 1.8961 | 3.5166 | 2.7456 | 0.7790 | 0.9430 |
| 1.5888 | 2450 | 1.6347 | 3.4784 | 2.7911 | 0.7800 | 0.9470 |
| 1.6213 | 2500 | 1.9176 | 3.4388 | 2.7349 | 0.7829 | 0.9440 |
| 1.6537 | 2550 | 2.0475 | 3.6968 | 2.7456 | 0.7754 | 0.9390 |
| 1.6861 | 2600 | 1.7946 | 3.4758 | 2.7046 | 0.7848 | 0.9470 |
| 1.7185 | 2650 | 1.9581 | 3.3828 | 2.7022 | 0.7867 | 0.9430 |
| 1.7510 | 2700 | 1.8475 | 3.3631 | 2.6706 | 0.7903 | 0.9470 |
| 1.7834 | 2750 | 1.836 | 3.5622 | 2.6512 | 0.7857 | 0.9450 |
| 1.8158 | 2800 | 2.051 | 3.3523 | 2.6542 | 0.7926 | 0.9390 |
| 1.8482 | 2850 | 1.829 | 3.3676 | 2.6730 | 0.7925 | 0.9390 |
| 1.8807 | 2900 | 1.7557 | 3.3632 | 2.6536 | 0.7954 | 0.9470 |
| 1.9131 | 2950 | 1.7725 | 3.3448 | 2.6437 | 0.7946 | 0.9470 |
| 1.9455 | 3000 | 1.7373 | 3.2736 | 2.6562 | 0.7987 | 0.9440 |
| 1.9780 | 3050 | 1.886 | 3.3404 | 2.6456 | 0.7958 | 0.9450 |
| 2.0104 | 3100 | 1.7217 | 3.2570 | 2.6893 | 0.7988 | 0.9400 |
| 2.0428 | 3150 | 1.6235 | 3.2331 | 2.6132 | 0.8004 | 0.9430 |
| 2.0752 | 3200 | 1.6678 | 3.2466 | 2.5904 | 0.8030 | 0.9470 |
| 2.1077 | 3250 | 1.6784 | 3.2339 | 2.5956 | 0.8008 | 0.9480 |
| 2.1401 | 3300 | 1.8422 | 3.2286 | 2.5997 | 0.8039 | 0.9480 |
| 2.1725 | 3350 | 1.4859 | 3.2163 | 2.5924 | 0.8049 | 0.9470 |
| 2.2049 | 3400 | 1.6165 | 3.3246 | 2.6167 | 0.7989 | 0.9440 |
| 2.2374 | 3450 | 1.65 | 3.2184 | 2.5864 | 0.8039 | 0.9460 |
| 2.2698 | 3500 | 1.5071 | 3.2274 | 2.5788 | 0.8019 | 0.9460 |
| 2.3022 | 3550 | 1.5238 | 3.2032 | 2.5608 | 0.8075 | 0.9480 |
| 2.3346 | 3600 | 1.568 | 3.2409 | 2.5649 | 0.8081 | 0.9470 |
| 2.3671 | 3650 | 1.4644 | 3.1937 | 2.5841 | 0.8079 | 0.9430 |
| 2.3995 | 3700 | 1.5782 | 3.2033 | 2.5909 | 0.8065 | 0.9450 |
| 2.4319 | 3750 | 1.6976 | 3.1905 | 2.5690 | 0.8073 | 0.9470 |
| 2.4643 | 3800 | 1.4682 | 3.2078 | 2.5610 | 0.8052 | 0.9490 |
| 2.4968 | 3850 | 1.7414 | 3.1822 | 2.5650 | 0.8072 | 0.9500 |
| 2.5292 | 3900 | 1.654 | 3.1890 | 2.5566 | 0.8110 | 0.9490 |
| 2.5616 | 3950 | 1.5187 | 3.1843 | 2.5508 | 0.8090 | 0.9470 |
| 2.5940 | 4000 | 1.4893 | 3.1855 | 2.5527 | 0.8067 | 0.9470 |
| 2.6265 | 4050 | 1.6716 | 3.1520 | 2.5432 | 0.8093 | 0.9480 |
| 2.6589 | 4100 | 1.4914 | 3.1868 | 2.5466 | 0.8099 | 0.9500 |
| 2.6913 | 4150 | 1.6231 | 3.1702 | 2.5235 | 0.8112 | 0.9500 |
| 2.7237 | 4200 | 1.6058 | 3.1561 | 2.5171 | 0.8096 | 0.9520 |
| 2.7562 | 4250 | 1.5753 | 3.1660 | 2.5068 | 0.8111 | 0.9530 |
| 2.7886 | 4300 | 1.4654 | 3.1507 | 2.5156 | 0.8138 | 0.9510 |
| 2.8210 | 4350 | 1.5901 | 3.1960 | 2.4917 | 0.8115 | 0.9540 |
| 2.8534 | 4400 | 1.5034 | 3.1491 | 2.4960 | 0.8116 | 0.9550 |
| 2.8859 | 4450 | 1.4088 | 3.1505 | 2.5086 | 0.8133 | 0.9530 |
| 2.9183 | 4500 | 1.5527 | 3.1671 | 2.5154 | 0.8112 | 0.9540 |
| 2.9507 | 4550 | 1.5344 | 3.1329 | 2.5016 | 0.8141 | 0.9530 |
| 2.9831 | 4600 | 1.4156 | 3.1439 | 2.4858 | 0.8146 | 0.9550 |
| 3.0156 | 4650 | 1.8602 | 3.1056 | 2.4799 | 0.8163 | 0.9550 |
| 3.0480 | 4700 | 1.4472 | 3.1387 | 2.4539 | 0.8126 | 0.9540 |
| 3.0804 | 4750 | 1.3582 | 3.1220 | 2.4676 | 0.8159 | 0.9530 |
| 3.1128 | 4800 | 1.5408 | 3.1309 | 2.4722 | 0.8142 | 0.9540 |
| 3.1453 | 4850 | 1.3755 | 3.1227 | 2.4624 | 0.8171 | 0.9530 |
| 3.1777 | 4900 | 1.4571 | 3.1284 | 2.4410 | 0.8162 | 0.9560 |
| 3.2101 | 4950 | 1.5657 | 3.0882 | 2.4486 | 0.8167 | 0.9550 |
| 3.2425 | 5000 | 1.5325 | 3.0980 | 2.4339 | 0.8178 | 0.9540 |
| 3.2750 | 5050 | 1.4671 | 3.0961 | 2.4625 | 0.8169 | 0.9550 |
| 3.3074 | 5100 | 1.4808 | 3.1176 | 2.4578 | 0.8180 | 0.9550 |
| 3.3398 | 5150 | 1.4172 | 3.1338 | 2.4515 | 0.8168 | 0.9550 |
| 3.3722 | 5200 | 1.4953 | 3.1047 | 2.4425 | 0.8174 | 0.9540 |
| 3.4047 | 5250 | 1.6419 | 3.1081 | 2.4317 | 0.8180 | 0.9540 |
| 3.4371 | 5300 | 1.5425 | 3.0910 | 2.4481 | 0.8210 | 0.9560 |
| 3.4695 | 5350 | 1.5598 | 3.1049 | 2.4365 | 0.8198 | 0.9560 |
| 3.5019 | 5400 | 1.4086 | 3.1036 | 2.4352 | 0.8198 | 0.9550 |
| 3.5344 | 5450 | 1.6057 | 3.1076 | 2.4269 | 0.8197 | 0.9560 |
| 3.5668 | 5500 | 1.6735 | 3.0792 | 2.4291 | 0.8200 | 0.9550 |
| 3.5992 | 5550 | 1.401 | 3.0959 | 2.4364 | 0.8211 | 0.9550 |
| 3.6316 | 5600 | 1.2475 | 3.0909 | 2.4324 | 0.8202 | 0.9570 |
| 3.6641 | 5650 | 1.2495 | 3.0686 | 2.4148 | 0.8210 | 0.9550 |
| 3.6965 | 5700 | 1.4457 | 3.0837 | 2.4123 | 0.8197 | 0.9570 |
| 3.7289 | 5750 | 1.5794 | 3.0877 | 2.4171 | 0.8191 | 0.9560 |
| 3.7613 | 5800 | 1.5696 | 3.0936 | 2.4153 | 0.8186 | 0.9560 |
| 3.7938 | 5850 | 1.5947 | 3.0778 | 2.4173 | 0.8190 | 0.9560 |
| 3.8262 | 5900 | 1.4517 | 3.0760 | 2.4242 | 0.8202 | 0.9560 |
| 3.8586 | 5950 | 1.553 | 3.0897 | 2.4222 | 0.8188 | 0.9580 |
| 3.8911 | 6000 | 1.2109 | 3.0683 | 2.4233 | 0.8211 | 0.9550 |
| 3.9235 | 6050 | 1.4384 | 3.0756 | 2.4221 | 0.8208 | 0.9560 |
| 3.9559 | 6100 | 1.4945 | 3.0755 | 2.4179 | 0.8202 | 0.9560 |
| 3.9883 | 6150 | 1.4597 | 3.0686 | 2.4183 | 0.8204 | 0.9560 |
</details>
### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 5.0.0
- Transformers: 4.55.0.dev0
- PyTorch: 2.5.1+cu121
- Accelerate: 1.9.0
- Datasets: 2.19.1
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
| donoway/ARC-Challenge_Llama-3.2-1B-ltjg67d4 | donoway | 2025-08-19T07:02:25Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "base_model:meta-llama/Llama-3.2-1B", "base_model:finetune:meta-llama/Llama-3.2-1B", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-08-19T06:51:25Z |
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-ltjg67d4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-ltjg67d4
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1531
- Model Preparation Time: 0.0058
- Mdl: 1360.1299
- Accumulated Loss: 942.7702
- Correct Preds: 76.0
- Total Preds: 299.0
- Accuracy: 0.2542
- Correct Gen Preds: 51.0
- Gen Accuracy: 0.1706
- Correct Gen Preds 32: 4.0
- Correct Preds 32: 8.0
- Total Labels 32: 64.0
- Accuracy 32: 0.125
- Gen Accuracy 32: 0.0625
- Correct Gen Preds 33: 46.0
- Correct Preds 33: 64.0
- Total Labels 33: 73.0
- Accuracy 33: 0.8767
- Gen Accuracy 33: 0.6301
- Correct Gen Preds 34: 0.0
- Correct Preds 34: 0.0
- Total Labels 34: 78.0
- Accuracy 34: 0.0
- Gen Accuracy 34: 0.0
- Correct Gen Preds 35: 1.0
- Correct Preds 35: 4.0
- Total Labels 35: 83.0
- Accuracy 35: 0.0482
- Gen Accuracy 35: 0.0120
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
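A sketch of how these hyperparameters translate into `transformers` `TrainingArguments` (`output_dir` is a placeholder):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="ARC-Challenge_Llama-3.2-1B-ltjg67d4",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```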
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7744 | 1.0 | 1 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7744 | 2.0 | 2 | 2.6648 | 0.0058 | 1149.5159 | 796.7837 | 73.0 | 299.0 | 0.2441 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 65.0 | 73.0 | 73.0 | 1.0 | 0.8904 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.7096 | 3.0 | 3 | 2.4451 | 0.0058 | 1054.7160 | 731.0734 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.2496 | 4.0 | 4 | 3.1416 | 0.0058 | 1355.1840 | 939.3419 | 68.0 | 299.0 | 0.2274 | 33.0 | 0.1104 | 27.0 | 61.0 | 64.0 | 0.9531 | 0.4219 | 3.0 | 4.0 | 73.0 | 0.0548 | 0.0411 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 3.0 | 3.0 | 83.0 | 0.0361 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.4068 | 5.0 | 5 | 3.1531 | 0.0058 | 1360.1299 | 942.7702 | 76.0 | 299.0 | 0.2542 | 51.0 | 0.1706 | 4.0 | 8.0 | 64.0 | 0.125 | 0.0625 | 46.0 | 64.0 | 73.0 | 0.8767 | 0.6301 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 1.0 | 4.0 | 83.0 | 0.0482 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0099 | 6.0 | 6 | 4.9358 | 0.0058 | 2129.1246 | 1475.7967 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0007 | 7.0 | 7 | 6.3486 | 0.0058 | 2738.5789 | 1898.2383 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 8.0 | 8 | 7.3310 | 0.0058 | 3162.3601 | 2191.9810 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 8.0769 | 0.0058 | 3484.0958 | 2414.9912 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 8.6401 | 0.0058 | 3727.0643 | 2583.4041 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 9.0754 | 0.0058 | 3914.8341 | 2713.5562 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 9.3927 | 0.0058 | 4051.6855 | 2808.4144 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 9.6301 | 0.0058 | 4154.1028 | 2879.4047 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 9.8237 | 0.0058 | 4237.5889 | 2937.2728 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 9.9746 | 0.0058 | 4302.6847 | 2982.3938 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 10.0964 | 0.0058 | 4355.2479 | 3018.8278 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 10.2009 | 0.0058 | 4400.3308 | 3050.0769 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 10.2843 | 0.0058 | 4436.3068 | 3075.0135 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 10.3495 | 0.0058 | 4464.4380 | 3094.5126 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 10.3984 | 0.0058 | 4485.5353 | 3109.1362 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 10.4396 | 0.0058 | 4503.2981 | 3121.4484 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 10.4770 | 0.0058 | 4519.4377 | 3132.6355 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 10.5038 | 0.0058 | 4530.9789 | 3140.6353 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 10.5224 | 0.0058 | 4539.0044 | 3146.1981 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 10.5508 | 0.0058 | 4551.2647 | 3154.6963 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 10.5615 | 0.0058 | 4555.8643 | 3157.8845 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 10.5749 | 0.0058 | 4561.6341 | 3161.8838 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 10.5818 | 0.0058 | 4564.6095 | 3163.9462 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 10.5933 | 0.0058 | 4569.5683 | 3167.3834 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 10.5972 | 0.0058 | 4571.2822 | 3168.5714 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 10.6010 | 0.0058 | 4572.9054 | 3169.6965 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 10.6073 | 0.0058 | 4575.6091 | 3171.5706 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 10.6110 | 0.0058 | 4577.2322 | 3172.6956 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 10.6146 | 0.0058 | 4578.7654 | 3173.7584 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 10.6158 | 0.0058 | 4579.3059 | 3174.1330 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
aiface/ModernBERT-large_nli
|
aiface
| 2025-08-19T07:02:21Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"modernbert",
"text-classification",
"generated_from_trainer",
"base_model:answerdotai/ModernBERT-large",
"base_model:finetune:answerdotai/ModernBERT-large",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T03:22:37Z
|
---
library_name: transformers
license: apache-2.0
base_model: answerdotai/ModernBERT-large
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ModernBERT-large_nli
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ModernBERT-large_nli
This model is a fine-tuned version of [answerdotai/ModernBERT-large](https://huggingface.co/answerdotai/ModernBERT-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6038
- Accuracy: 0.5787
- Precision Macro: 0.5794
- Recall Macro: 0.5790
- F1 Macro: 0.5792
- F1 Weighted: 0.5788
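Since no usage snippet is provided, below is a minimal inference sketch (hedged: the label names come from the model's `id2label` config and may not be the usual entailment/neutral/contradiction scheme; the premise/hypothesis pairing assumes the pipeline's standard `text`/`text_pair` convention):

```python
from transformers import pipeline

# Minimal sketch, not an official example: label names come from the model's
# id2label mapping and are not documented on this card.
classifier = pipeline("text-classification", model="aiface/ModernBERT-large_nli")

# NLI-style premise/hypothesis pair via the pipeline's text/text_pair convention.
pred = classifier({"text": "A man is playing a guitar.",
                   "text_pair": "A person is making music."})
print(pred)
```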
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision Macro | Recall Macro | F1 Macro | F1 Weighted |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------------:|:------------:|:--------:|:-----------:|
| 2.1283 | 1.0 | 143 | 1.0136 | 0.4807 | 0.4674 | 0.4835 | 0.4509 | 0.4492 |
| 1.8848 | 2.0 | 286 | 0.9818 | 0.5202 | 0.5745 | 0.5219 | 0.5042 | 0.5038 |
| 1.7416 | 3.0 | 429 | 1.1233 | 0.3220 | 0.2102 | 0.3259 | 0.2190 | 0.2174 |
| 2.2168 | 4.0 | 572 | 1.1135 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2099 | 5.0 | 715 | 1.1089 | 0.3277 | 0.1092 | 0.3333 | 0.1646 | 0.1618 |
| 2.2191 | 6.0 | 858 | 1.1231 | 0.3282 | 0.4426 | 0.3338 | 0.1655 | 0.1627 |
| 2.2027 | 7.0 | 1001 | 1.0931 | 0.3774 | 0.2508 | 0.3801 | 0.3016 | 0.2993 |
| 2.1846 | 8.0 | 1144 | 1.0723 | 0.4013 | 0.3861 | 0.3995 | 0.3692 | 0.3705 |
| 2.1232 | 9.0 | 1287 | 1.0461 | 0.4244 | 0.4225 | 0.4248 | 0.4203 | 0.4202 |
| 2.0586 | 10.0 | 1430 | 1.0345 | 0.4510 | 0.4495 | 0.4494 | 0.4210 | 0.4220 |
| 2.0578 | 11.0 | 1573 | 1.0390 | 0.4523 | 0.4797 | 0.4511 | 0.4522 | 0.4525 |
| 2.0289 | 12.0 | 1716 | 1.0626 | 0.4665 | 0.5296 | 0.4668 | 0.4391 | 0.4389 |
| 1.5688 | 13.0 | 1859 | 0.8686 | 0.6084 | 0.6082 | 0.6089 | 0.6064 | 0.6061 |
| 1.2262 | 14.0 | 2002 | 0.9452 | 0.5973 | 0.5972 | 0.5978 | 0.5961 | 0.5958 |
| 0.6694 | 15.0 | 2145 | 1.2849 | 0.5809 | 0.5809 | 0.5817 | 0.5802 | 0.5798 |
| 0.2152 | 16.0 | 2288 | 1.9241 | 0.5752 | 0.5760 | 0.5753 | 0.5755 | 0.5753 |
| 0.043 | 17.0 | 2431 | 2.3196 | 0.5672 | 0.5685 | 0.5673 | 0.5675 | 0.5672 |
| 0.0074 | 18.0 | 2574 | 2.5393 | 0.5734 | 0.5747 | 0.5736 | 0.5740 | 0.5737 |
| 0.0015 | 19.0 | 2717 | 2.5970 | 0.5769 | 0.5780 | 0.5772 | 0.5776 | 0.5772 |
| 0.002 | 20.0 | 2860 | 2.6038 | 0.5787 | 0.5794 | 0.5790 | 0.5792 | 0.5788 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.7.0+cu126
- Datasets 4.0.0
- Tokenizers 0.21.4
|
djc05142/cst_quantized_model_v3
|
djc05142
| 2025-08-19T07:02:11Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T07:00:56Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
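In the absence of author-provided instructions, the snippet below is a generic, hedged sketch for loading the checkpoint with standard `transformers` APIs (the chat formatting and generation settings are assumptions, not documented behavior):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "djc05142/cst_quantized_model_v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer ships a chat template (the repo is tagged "conversational").
messages = [{"role": "user", "content": "Hello, what can you do?"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                       return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```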
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755586733
|
0xaoyama
| 2025-08-19T06:59:26Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:59:14Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755586670
|
IvanJAjebu
| 2025-08-19T06:59:19Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:58:56Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jtekt-physical-ai/lerobot_actv2
|
jtekt-physical-ai
| 2025-08-19T06:59:15Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:yurayuray/retainer_mizoguchi3",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T05:57:00Z
|
---
datasets: yurayuray/retainer_mizoguchi3
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
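Beyond the CLI, the checkpoint can also be loaded programmatically; below is a minimal, hedged sketch (the import path varies across lerobot versions, and the observation keys depend on how the dataset names its cameras and state features):

```python
import torch
from lerobot.common.policies.act.modeling_act import ACTPolicy

# Hedged sketch: import path and observation keys are version/dataset dependent.
policy = ACTPolicy.from_pretrained("jtekt-physical-ai/lerobot_actv2")
policy.eval()

# batch = {
#     "observation.state": torch.zeros(1, state_dim),
#     "observation.images.<camera_name>": torch.zeros(1, 3, H, W),
# }
# with torch.inference_mode():
#     action = policy.select_action(batch)
```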
---
## Model Details
- **License:** apache-2.0
|
XiangDeyi/xdytest1
|
XiangDeyi
| 2025-08-19T06:58:14Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-19T06:58:14Z
|
---
license: apache-2.0
---
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755584966
|
mang3dd
| 2025-08-19T06:57:47Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:57:44Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Easy_Llama-3.2-1B-xl28q3hn
|
donoway
| 2025-08-19T06:57:36Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:46:08Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-xl28q3hn
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-xl28q3hn
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2555
- Model Preparation Time: 0.006
- Mdl: 1032.4540
- Accumulated Loss: 715.6426
- Correct Preds: 291.0
- Total Preds: 570.0
- Accuracy: 0.5105
- Correct Gen Preds: 291.0
- Gen Accuracy: 0.5105
- Correct Gen Preds 32: 98.0
- Correct Preds 32: 98.0
- Total Labels 32: 158.0
- Accuracy 32: 0.6203
- Gen Accuracy 32: 0.6203
- Correct Gen Preds 33: 130.0
- Correct Preds 33: 130.0
- Total Labels 33: 152.0
- Accuracy 33: 0.8553
- Gen Accuracy 33: 0.8553
- Correct Gen Preds 34: 40.0
- Correct Preds 34: 40.0
- Total Labels 34: 142.0
- Accuracy 34: 0.2817
- Gen Accuracy 34: 0.2817
- Correct Gen Preds 35: 23.0
- Correct Preds 35: 23.0
- Total Labels 35: 118.0
- Accuracy 35: 0.1949
- Gen Accuracy 35: 0.1949
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
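For orientation, these settings roughly correspond to a `transformers` `TrainingArguments` configuration like the sketch below (a hedged reconstruction; the actual training script is not published with this card):

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above; dataset handling
# and the evaluation loop are not part of this card.
args = TrainingArguments(
    output_dir="ARC-Easy_Llama-3.2-1B-xl28q3hn",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=112,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=100,
)
```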
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3552 | 1.0 | 1 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3552 | 2.0 | 2 | 2.4687 | 0.006 | 2030.1287 | 1407.1780 | 221.0 | 570.0 | 0.3877 | 221.0 | 0.3877 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 85.0 | 85.0 | 152.0 | 0.5592 | 0.5592 | 136.0 | 136.0 | 142.0 | 0.9577 | 0.9577 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7603 | 3.0 | 3 | 1.2555 | 0.006 | 1032.4540 | 715.6426 | 291.0 | 570.0 | 0.5105 | 291.0 | 0.5105 | 98.0 | 98.0 | 158.0 | 0.6203 | 0.6203 | 130.0 | 130.0 | 152.0 | 0.8553 | 0.8553 | 40.0 | 40.0 | 142.0 | 0.2817 | 0.2817 | 23.0 | 23.0 | 118.0 | 0.1949 | 0.1949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.4267 | 4.0 | 4 | 2.5733 | 0.006 | 2116.1258 | 1466.7867 | 261.0 | 570.0 | 0.4579 | 260.0 | 0.4561 | 151.0 | 152.0 | 158.0 | 0.9620 | 0.9557 | 39.0 | 39.0 | 152.0 | 0.2566 | 0.2566 | 42.0 | 42.0 | 142.0 | 0.2958 | 0.2958 | 28.0 | 28.0 | 118.0 | 0.2373 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0491 | 5.0 | 5 | 3.1596 | 0.006 | 2598.2545 | 1800.9728 | 284.0 | 570.0 | 0.4982 | 284.0 | 0.4982 | 151.0 | 151.0 | 158.0 | 0.9557 | 0.9557 | 56.0 | 56.0 | 152.0 | 0.3684 | 0.3684 | 50.0 | 50.0 | 142.0 | 0.3521 | 0.3521 | 27.0 | 27.0 | 118.0 | 0.2288 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0044 | 6.0 | 6 | 4.0391 | 0.006 | 3321.5305 | 2302.3095 | 262.0 | 570.0 | 0.4596 | 259.0 | 0.4544 | 151.0 | 152.0 | 158.0 | 0.9620 | 0.9557 | 41.0 | 41.0 | 152.0 | 0.2697 | 0.2697 | 44.0 | 45.0 | 142.0 | 0.3169 | 0.3099 | 23.0 | 24.0 | 118.0 | 0.2034 | 0.1949 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 7.0 | 7 | 4.4151 | 0.006 | 3630.7350 | 2516.6338 | 253.0 | 570.0 | 0.4439 | 239.0 | 0.4193 | 144.0 | 152.0 | 158.0 | 0.9620 | 0.9114 | 36.0 | 38.0 | 152.0 | 0.25 | 0.2368 | 38.0 | 41.0 | 142.0 | 0.2887 | 0.2676 | 21.0 | 22.0 | 118.0 | 0.1864 | 0.1780 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 4.5569 | 0.006 | 3747.3361 | 2597.4554 | 250.0 | 570.0 | 0.4386 | 223.0 | 0.3912 | 135.0 | 154.0 | 158.0 | 0.9747 | 0.8544 | 35.0 | 38.0 | 152.0 | 0.25 | 0.2303 | 35.0 | 39.0 | 142.0 | 0.2746 | 0.2465 | 18.0 | 19.0 | 118.0 | 0.1610 | 0.1525 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 4.6453 | 0.006 | 3819.9784 | 2647.8072 | 247.0 | 570.0 | 0.4333 | 204.0 | 0.3579 | 123.0 | 152.0 | 158.0 | 0.9620 | 0.7785 | 33.0 | 39.0 | 152.0 | 0.2566 | 0.2171 | 31.0 | 37.0 | 142.0 | 0.2606 | 0.2183 | 17.0 | 19.0 | 118.0 | 0.1610 | 0.1441 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 10 | 4.8047 | 0.006 | 3951.0414 | 2738.6532 | 242.0 | 570.0 | 0.4246 | 203.0 | 0.3561 | 123.0 | 152.0 | 158.0 | 0.9620 | 0.7785 | 35.0 | 39.0 | 152.0 | 0.2566 | 0.2303 | 30.0 | 33.0 | 142.0 | 0.2324 | 0.2113 | 15.0 | 18.0 | 118.0 | 0.1525 | 0.1271 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 5.0241 | 0.006 | 4131.5031 | 2863.7397 | 236.0 | 570.0 | 0.4140 | 201.0 | 0.3526 | 125.0 | 153.0 | 158.0 | 0.9684 | 0.7911 | 34.0 | 37.0 | 152.0 | 0.2434 | 0.2237 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 17.0 | 118.0 | 0.1441 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 5.2229 | 0.006 | 4295.0154 | 2977.0778 | 235.0 | 570.0 | 0.4123 | 203.0 | 0.3561 | 129.0 | 154.0 | 158.0 | 0.9747 | 0.8165 | 32.0 | 36.0 | 152.0 | 0.2368 | 0.2105 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 5.3741 | 0.006 | 4419.3154 | 3063.2360 | 235.0 | 570.0 | 0.4123 | 202.0 | 0.3544 | 129.0 | 155.0 | 158.0 | 0.9810 | 0.8165 | 31.0 | 35.0 | 152.0 | 0.2303 | 0.2039 | 28.0 | 29.0 | 142.0 | 0.2042 | 0.1972 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 5.5052 | 0.006 | 4527.0926 | 3137.9415 | 235.0 | 570.0 | 0.4123 | 207.0 | 0.3632 | 135.0 | 156.0 | 158.0 | 0.9873 | 0.8544 | 31.0 | 35.0 | 152.0 | 0.2303 | 0.2039 | 27.0 | 28.0 | 142.0 | 0.1972 | 0.1901 | 14.0 | 16.0 | 118.0 | 0.1356 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 5.5976 | 0.006 | 4603.0781 | 3190.6106 | 234.0 | 570.0 | 0.4105 | 207.0 | 0.3632 | 135.0 | 156.0 | 158.0 | 0.9873 | 0.8544 | 32.0 | 35.0 | 152.0 | 0.2303 | 0.2105 | 26.0 | 28.0 | 142.0 | 0.1972 | 0.1831 | 14.0 | 15.0 | 118.0 | 0.1271 | 0.1186 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 5.6853 | 0.006 | 4675.2022 | 3240.6032 | 228.0 | 570.0 | 0.4 | 206.0 | 0.3614 | 138.0 | 155.0 | 158.0 | 0.9810 | 0.8734 | 29.0 | 32.0 | 152.0 | 0.2105 | 0.1908 | 26.0 | 27.0 | 142.0 | 0.1901 | 0.1831 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 5.7800 | 0.006 | 4753.1165 | 3294.6093 | 228.0 | 570.0 | 0.4 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 29.0 | 31.0 | 152.0 | 0.2039 | 0.1908 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 5.8437 | 0.006 | 4805.4763 | 3330.9024 | 227.0 | 570.0 | 0.3982 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 29.0 | 30.0 | 152.0 | 0.1974 | 0.1908 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 5.9488 | 0.006 | 4891.9541 | 3390.8442 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 5.9804 | 0.006 | 4917.8580 | 3408.7994 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 6.0239 | 0.006 | 4953.6373 | 3433.5997 | 226.0 | 570.0 | 0.3965 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 6.0758 | 0.006 | 4996.3676 | 3463.2181 | 225.0 | 570.0 | 0.3947 | 206.0 | 0.3614 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 6.0958 | 0.006 | 5012.8294 | 3474.6285 | 225.0 | 570.0 | 0.3947 | 207.0 | 0.3632 | 141.0 | 156.0 | 158.0 | 0.9873 | 0.8924 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 25.0 | 26.0 | 142.0 | 0.1831 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 6.1508 | 0.006 | 5057.9994 | 3505.9380 | 225.0 | 570.0 | 0.3947 | 209.0 | 0.3667 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 28.0 | 29.0 | 152.0 | 0.1908 | 0.1842 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 6.1477 | 0.006 | 5055.4455 | 3504.1678 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 143.0 | 156.0 | 158.0 | 0.9873 | 0.9051 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 25.0 | 26.0 | 142.0 | 0.1831 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 6.1921 | 0.006 | 5092.0041 | 3529.5083 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 25.0 | 27.0 | 142.0 | 0.1901 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 6.2041 | 0.006 | 5101.8523 | 3536.3346 | 224.0 | 570.0 | 0.3930 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 25.0 | 27.0 | 142.0 | 0.1901 | 0.1761 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 6.2060 | 0.006 | 5103.4059 | 3537.4114 | 225.0 | 570.0 | 0.3947 | 208.0 | 0.3649 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 6.2192 | 0.006 | 5114.2474 | 3544.9262 | 225.0 | 570.0 | 0.3947 | 209.0 | 0.3667 | 145.0 | 156.0 | 158.0 | 0.9873 | 0.9177 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 6.2327 | 0.006 | 5125.3556 | 3552.6258 | 221.0 | 570.0 | 0.3877 | 206.0 | 0.3614 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 23.0 | 25.0 | 142.0 | 0.1761 | 0.1620 | 13.0 | 13.0 | 118.0 | 0.1102 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 6.2450 | 0.006 | 5135.5071 | 3559.6623 | 222.0 | 570.0 | 0.3895 | 206.0 | 0.3614 | 144.0 | 156.0 | 158.0 | 0.9873 | 0.9114 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 23.0 | 26.0 | 142.0 | 0.1831 | 0.1620 | 13.0 | 13.0 | 118.0 | 0.1102 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 6.2478 | 0.006 | 5137.7630 | 3561.2259 | 224.0 | 570.0 | 0.3930 | 210.0 | 0.3684 | 146.0 | 156.0 | 158.0 | 0.9873 | 0.9241 | 27.0 | 28.0 | 152.0 | 0.1842 | 0.1776 | 24.0 | 26.0 | 142.0 | 0.1831 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 6.2653 | 0.006 | 5152.1581 | 3571.2038 | 224.0 | 570.0 | 0.3930 | 209.0 | 0.3667 | 146.0 | 156.0 | 158.0 | 0.9873 | 0.9241 | 26.0 | 27.0 | 152.0 | 0.1776 | 0.1711 | 24.0 | 27.0 | 142.0 | 0.1901 | 0.1690 | 13.0 | 14.0 | 118.0 | 0.1186 | 0.1102 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755586541
|
0xaoyama
| 2025-08-19T06:56:13Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:56:02Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
resistz/sft_Qwen3-4B-Base_ultra200k
|
resistz
| 2025-08-19T06:53:44Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:49:25Z
|
---
library_name: transformers
model_name: sft_Qwen3-4B-Base_ultra200k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for sft_Qwen3-4B-Base_ultra200k
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="resistz/sft_Qwen3-4B-Base_ultra200k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/resistzzz97/Alignment_Influence/runs/eswkk8st)
This model was trained with SFT.
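For reference, a minimal TRL SFT setup looks like the hedged sketch below (the base model is inferred from the repo name and the dataset is a placeholder; neither is documented on this card):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hedged sketch: base model inferred from the repo name; dataset is a stand-in.
dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder conversational data
trainer = SFTTrainer(
    model="Qwen/Qwen3-4B-Base",  # assumption, not confirmed by the card
    args=SFTConfig(output_dir="sft_Qwen3-4B-Base_ultra200k"),
    train_dataset=dataset,
)
trainer.train()
```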
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
phospho-app/plungedplummer-gr00t-PickUp2-nhi1p
|
phospho-app
| 2025-08-19T06:50:53Z
| 0
| 0
|
phosphobot
|
[
"phosphobot",
"safetensors",
"gr00t_n1_5",
"gr00t",
"robotics",
"dataset:plungedplummer/PickUp2",
"region:us"
] |
robotics
| 2025-08-19T06:19:11Z
|
---
datasets: plungedplummer/PickUp2
library_name: phosphobot
pipeline_tag: robotics
model_name: gr00t
tags:
- phosphobot
- gr00t
task_categories:
- robotics
---
# gr00t Model - phospho Training Pipeline
## This model was trained using **phospho**.
Training was successful; try it out on your robot!
## Training parameters:
- **Dataset**: [plungedplummer/PickUp2](https://huggingface.co/datasets/plungedplummer/PickUp2)
- **Wandb run URL**: None
- **Epochs**: 10
- **Batch size**: 107
- **Training steps**: None
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
donoway/ARC-Challenge_Llama-3.2-1B-r0yf05qb
|
donoway
| 2025-08-19T06:50:44Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:33:14Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-r0yf05qb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-r0yf05qb
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.9607
- Model Preparation Time: 0.0059
- Mdl: 2139.8614
- Accumulated Loss: 1483.2389
- Correct Preds: 77.0
- Total Preds: 299.0
- Accuracy: 0.2575
- Correct Gen Preds: 15.0
- Gen Accuracy: 0.0502
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 2.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0312
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 11.0
- Correct Preds 33: 60.0
- Total Labels 33: 73.0
- Accuracy 33: 0.8219
- Gen Accuracy 33: 0.1507
- Correct Gen Preds 34: 2.0
- Correct Preds 34: 10.0
- Total Labels 34: 78.0
- Accuracy 34: 0.1282
- Gen Accuracy 34: 0.0256
- Correct Gen Preds 35: 2.0
- Correct Preds 35: 5.0
- Total Labels 35: 83.0
- Accuracy 35: 0.0602
- Gen Accuracy 35: 0.0241
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: adamw_torch (OptimizerNames.ADAMW_TORCH) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0059 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7061 | 1.0 | 1 | 1.6389 | 0.0059 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7128 | 2.0 | 2 | 2.9134 | 0.0059 | 1256.7296 | 871.0986 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.1696 | 3.0 | 3 | 2.2709 | 0.0059 | 979.5827 | 678.9950 | 76.0 | 299.0 | 0.2542 | 4.0 | 0.0134 | 1.0 | 22.0 | 64.0 | 0.3438 | 0.0156 | 2.0 | 50.0 | 73.0 | 0.6849 | 0.0274 | 1.0 | 1.0 | 78.0 | 0.0128 | 0.0128 | 0.0 | 3.0 | 83.0 | 0.0361 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.8314 | 4.0 | 4 | 1.8837 | 0.0059 | 812.5640 | 563.2265 | 75.0 | 299.0 | 0.2508 | 70.0 | 0.2341 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 68.0 | 73.0 | 73.0 | 1.0 | 0.9315 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.4234 | 5.0 | 5 | 2.7987 | 0.0059 | 1207.2848 | 836.8261 | 73.0 | 299.0 | 0.2441 | 71.0 | 0.2375 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 71.0 | 73.0 | 73.0 | 1.0 | 0.9726 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.1618 | 6.0 | 6 | 3.2839 | 0.0059 | 1416.5756 | 981.8954 | 75.0 | 299.0 | 0.2508 | 48.0 | 0.1605 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 47.0 | 73.0 | 73.0 | 1.0 | 0.6438 | 1.0 | 2.0 | 78.0 | 0.0256 | 0.0128 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0192 | 7.0 | 7 | 3.7217 | 0.0059 | 1605.4263 | 1112.7967 | 74.0 | 299.0 | 0.2475 | 23.0 | 0.0769 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 69.0 | 73.0 | 0.9452 | 0.2740 | 1.0 | 3.0 | 78.0 | 0.0385 | 0.0128 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0008 | 8.0 | 8 | 3.9983 | 0.0059 | 1724.7310 | 1195.4925 | 74.0 | 299.0 | 0.2475 | 17.0 | 0.0569 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 14.0 | 66.0 | 73.0 | 0.9041 | 0.1918 | 1.0 | 5.0 | 78.0 | 0.0641 | 0.0128 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0002 | 9.0 | 9 | 4.2129 | 0.0059 | 1817.3188 | 1259.6694 | 72.0 | 299.0 | 0.2408 | 16.0 | 0.0535 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 13.0 | 64.0 | 73.0 | 0.8767 | 0.1781 | 1.0 | 5.0 | 78.0 | 0.0641 | 0.0128 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 10 | 4.3845 | 0.0059 | 1891.3416 | 1310.9781 | 72.0 | 299.0 | 0.2408 | 16.0 | 0.0535 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 13.0 | 63.0 | 73.0 | 0.8630 | 0.1781 | 1.0 | 5.0 | 78.0 | 0.0641 | 0.0128 | 2.0 | 3.0 | 83.0 | 0.0361 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 11.0 | 11 | 4.4976 | 0.0059 | 1940.0996 | 1344.7746 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 12.0 | 63.0 | 73.0 | 0.8630 | 0.1644 | 1.0 | 6.0 | 78.0 | 0.0769 | 0.0128 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 4.5953 | 0.0059 | 1982.2350 | 1373.9806 | 73.0 | 299.0 | 0.2441 | 15.0 | 0.0502 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 12.0 | 61.0 | 73.0 | 0.8356 | 0.1644 | 2.0 | 7.0 | 78.0 | 0.0897 | 0.0256 | 1.0 | 4.0 | 83.0 | 0.0482 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 4.6757 | 0.0059 | 2016.9523 | 1398.0448 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 11.0 | 61.0 | 73.0 | 0.8356 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 4.7315 | 0.0059 | 2041.0281 | 1414.7328 | 74.0 | 299.0 | 0.2475 | 14.0 | 0.0468 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 1.0 | 4.0 | 83.0 | 0.0482 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 4.7760 | 0.0059 | 2060.2072 | 1428.0268 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.8067 | 0.0059 | 2073.4479 | 1437.2046 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.8319 | 0.0059 | 2084.3267 | 1444.7451 | 73.0 | 299.0 | 0.2441 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.8644 | 0.0059 | 2098.3559 | 1454.4695 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.8834 | 0.0059 | 2106.5338 | 1460.1380 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.8959 | 0.0059 | 2111.9169 | 1463.8692 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.9055 | 0.0059 | 2116.0587 | 1466.7401 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.9121 | 0.0059 | 2118.9142 | 1468.7194 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.9255 | 0.0059 | 2124.6990 | 1472.7291 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.9346 | 0.0059 | 2128.6175 | 1475.4452 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.9462 | 0.0059 | 2133.6370 | 1478.9245 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.9465 | 0.0059 | 2133.7463 | 1479.0002 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.9520 | 0.0059 | 2136.1279 | 1480.6510 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.9488 | 0.0059 | 2134.7464 | 1479.6934 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.9554 | 0.0059 | 2137.6001 | 1481.6715 | 74.0 | 299.0 | 0.2475 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.9554 | 0.0059 | 2137.6041 | 1481.6743 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.9607 | 0.0059 | 2139.8614 | 1483.2389 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.9608 | 0.0059 | 2139.9391 | 1483.2928 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.9612 | 0.0059 | 2140.0986 | 1483.4033 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 4.9602 | 0.0059 | 2139.6793 | 1483.1127 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 4.9670 | 0.0059 | 2142.5922 | 1485.1317 | 75.0 | 299.0 | 0.2508 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 4.9635 | 0.0059 | 2141.0976 | 1484.0958 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 4.9663 | 0.0059 | 2142.2723 | 1484.9100 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 4.9711 | 0.0059 | 2144.3738 | 1486.3666 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 39.0 | 39 | 4.9587 | 0.0059 | 2139.0114 | 1482.6497 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 40.0 | 40 | 4.9709 | 0.0059 | 2144.2620 | 1486.2892 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 41.0 | 41 | 4.9670 | 0.0059 | 2142.5850 | 1485.1268 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 42.0 | 42 | 4.9677 | 0.0059 | 2142.8901 | 1485.3383 | 75.0 | 299.0 | 0.2508 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 43.0 | 43 | 4.9700 | 0.0059 | 2143.8805 | 1486.0247 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 44.0 | 44 | 4.9743 | 0.0059 | 2145.7331 | 1487.3088 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 45.0 | 45 | 4.9644 | 0.0059 | 2141.4820 | 1484.3622 | 76.0 | 299.0 | 0.2542 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 46.0 | 46 | 4.9724 | 0.0059 | 2144.9162 | 1486.7426 | 77.0 | 299.0 | 0.2575 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 10.0 | 78.0 | 0.1282 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 47.0 | 47 | 4.9662 | 0.0059 | 2142.2475 | 1484.8928 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 48.0 | 48 | 4.9728 | 0.0059 | 2145.0799 | 1486.8561 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 49.0 | 49 | 4.9655 | 0.0059 | 2141.9301 | 1484.6728 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 50.0 | 50 | 4.9758 | 0.0059 | 2146.4115 | 1487.7791 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 51.0 | 51 | 4.9633 | 0.0059 | 2141.0096 | 1484.0348 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 52.0 | 52 | 4.9658 | 0.0059 | 2142.0944 | 1484.7867 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 53.0 | 53 | 4.9699 | 0.0059 | 2143.8417 | 1485.9978 | 75.0 | 299.0 | 0.2508 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 54.0 | 54 | 4.9681 | 0.0059 | 2143.0609 | 1485.4566 | 77.0 | 299.0 | 0.2575 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 10.0 | 78.0 | 0.1282 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 55.0 | 55 | 4.9687 | 0.0059 | 2143.3099 | 1485.6292 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 56.0 | 56 | 4.9649 | 0.0059 | 2141.6995 | 1484.5129 | 76.0 | 299.0 | 0.2542 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 57.0 | 57 | 4.9705 | 0.0059 | 2144.0945 | 1486.1730 | 75.0 | 299.0 | 0.2508 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 58.0 | 58 | 4.9699 | 0.0059 | 2143.8280 | 1485.9883 | 77.0 | 299.0 | 0.2575 | 16.0 | 0.0535 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 3.0 | 10.0 | 78.0 | 0.1282 | 0.0385 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 59.0 | 59 | 4.9683 | 0.0059 | 2143.1700 | 1485.5323 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 59.0 | 73.0 | 0.8082 | 0.1507 | 2.0 | 9.0 | 78.0 | 0.1154 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 60.0 | 60 | 4.9680 | 0.0059 | 2143.0255 | 1485.4321 | 77.0 | 299.0 | 0.2575 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 10.0 | 78.0 | 0.1282 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 61.0 | 61 | 4.9672 | 0.0059 | 2142.6706 | 1485.1861 | 75.0 | 299.0 | 0.2508 | 15.0 | 0.0502 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 11.0 | 60.0 | 73.0 | 0.8219 | 0.1507 | 2.0 | 8.0 | 78.0 | 0.1026 | 0.0256 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
seanmamasde/llama4-maverick-17B-128E-eagle3-sglang
|
seanmamasde
| 2025-08-19T06:50:08Z
| 0
| 1
| null |
[
"safetensors",
"llama",
"base_model:meta-llama/Llama-4-Maverick-17B-128E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Maverick-17B-128E-Instruct",
"license:mit",
"region:us"
] | null | 2025-08-19T03:45:58Z
|
---
license: mit
base_model:
- meta-llama/Llama-4-Maverick-17B-128E-Instruct
---
> This Eagle3 draft model was trained using [`SpecForge`](https://github.com/sgl-project/SpecForge); you can see the exact run parameters here: [my GitHub](https://github.com/seanmamasde/SpecForge/blob/my-stuff/examples/run_llama4_eagle3_online_maverick.sh).
This model was trained because, when I was testing out [the nvidia one](https://huggingface.co/nvidia/Llama-4-Maverick-17B-128E-Eagle3/tree/main), it didn't work as well as I thought it should.
I also saw that the original EAGLE-3 author recommends `SpecForge` for draft-model training in the README. So here it is.
## Training Settings
- Dataset: the built-in dataset pipeline provided by `SpecForge`; in this case, I used [this ShareGPT one](https://huggingface.co/datasets/Aeala/ShareGPT_Vicuna_unfiltered).
- All other settings are the same as the `examples/run_llama4_eagle3_online.sh` provided by `SpecForge`, except that `ttt-length` was reduced from `7` to `6`.
- This model was trained on 8 Nvidia H200 GPUs for 7 days (7 epochs).
- As a side note, `ttt-length=6` results in an OOM error on the second consecutive epoch, so I restarted (and resumed) after every epoch (each epoch takes a full 24 hours anyway).
## Inference Settings & Benchmarks
- Inference Framework: [`SGLang`](https://github.com/sgl-project/sglang) version `0.4.9.post3`
- Inference Backend: FlashAttn3 (FlashInfer is, as of now, still not available for Llama4)
- Hardware: 8 × Nvidia H200 GPUs
- Workflow:
- First, I searched for the best speculative settings (for the nvidia model; my model uses the same params), using the benchmark script provided by sglang:
```bash
python scripts/playground/bench_speculative.py \
--model-path ../cache/meta-llama/Llama-4-Maverick-17B-128E \
--speculative-draft-model-path nvidia/Llama-4-Maverick-17B-128E-Eagle3 \
--steps <different step sizes, e.g., '3 4 5'> \
--topk <different top k, e.g., '8 10 12'> \
--num_draft_tokens <different draft token counts, e.g., '12 24 36'> \
--batch-size <different batch sizes, e.g., '1 2 4'> \
--trust-remote-code
```
- These are the grid search results:
```jsonl
...
{"batch_size": 1, "steps": 3, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.042, "step_time": 0.00984, "speed": 105.927, "completion_tokens": 512.0}
{"batch_size": 1, "steps": 4, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.042, "step_time": 0.01037, "speed": 100.455, "completion_tokens": 512.0}
{"batch_size": 1, "steps": 5, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.042, "step_time": 0.01084, "speed": 96.094, "completion_tokens": 512.0}
{"batch_size": 2, "steps": 3, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.044, "step_time": 0.01112, "speed": 93.896, "completion_tokens": 512.0}
{"batch_size": 2, "steps": 4, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.044, "step_time": 0.01168, "speed": 89.335, "completion_tokens": 512.0}
{"batch_size": 2, "steps": 5, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.044, "step_time": 0.01217, "speed": 85.767, "completion_tokens": 512.0}
{"batch_size": 4, "steps": 3, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.041, "step_time": 0.01341, "speed": 77.611, "completion_tokens": 512.0}
{"batch_size": 4, "steps": 4, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.041, "step_time": 0.01406, "speed": 74.038, "completion_tokens": 512.0}
{"batch_size": 4, "steps": 5, "topk": 8, "num_draft_tokens": 10, "acc_length": 1.041, "step_time": 0.01463, "speed": 71.149, "completion_tokens": 512.0}
{"batch_size": 1, "steps": 2, "topk": 12, "num_draft_tokens": 24, "acc_length": 1.053, "step_time": 0.01064, "speed": 98.92, "completion_tokens": 512.0}
{"batch_size": 1, "steps": 3, "topk": 12, "num_draft_tokens": 24, "acc_length": 1.052, "step_time": 0.0108, "speed": 97.34, "completion_tokens": 512.0}
{"batch_size": 1, "steps": 3, "topk": 12, "num_draft_tokens": 36, "acc_length": 1.052, "step_time": 0.01172, "speed": 89.79, "completion_tokens": 512.0}
...
```
Judging from the results above,
- `batch_size=1`
- `steps=3`
- `topk=8`
- `num_draft_tokens=10`
this combination seems to yield the best results.
- Benchmark numbers:
| Item | Base Model | Eagle3 (NVDA) | Eagle3 (Mine) |
| ---------- | ---------- | ------------- | ----------------- |
| Throughput | 44.8 tok/s | 71.2 tok/s | **105.93 tok/s** |
| Mean TTFT | 161.49 ms | 51.74 ms | **46.81 ms** |
| Mean TPOT | 5.16 ms | 4.15 ms | **2.48 ms** |
As the table above shows, it came as a surprise to me that my model was considerably faster than the nvidia one; at the same time, I am concerned that I might have done something wrong enough to bias the benchmark itself.
Even if the benchmark results are valid, I have no clue why my model turns out to be faster. (P.S. If you know the reason, please don't hesitate to reach out.)
(I forgot to screenshot the terminal output when the inference rounds were done, so you will just have to trust the table results above.)
- Using this model
- First, install the framework of your choice (either `vllm` or `sglang` should be fine, but as of 19-Aug-2025, `vllm` still doesn't support Llama4 with Eagle3 very well, and `SpecForge` was built for `SGLang`).
- Set the `--speculative-draft-model-path` flag in the `SGLang` launch config to `seanmamasde/llama4-maverick-17B-128E-eagle3-sglang`, and optionally add `--speculative-num-steps 3 --speculative-eagle-topk 8 --speculative-num-draft-tokens 10` for best results, as in the sketch below.
- You're good to go!
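For reference, a minimal launch sketch (not the exact command used for the benchmarks above; the target model path and `--tp` size are assumptions, so adjust them to your setup):
```python
# Minimal sketch: launch an SGLang server with this repo as the Eagle3 draft
# model, using the best grid-search parameters found above.
# Assumptions: SGLang is installed; target model path and --tp match your setup.
import subprocess

subprocess.run([
    "python", "-m", "sglang.launch_server",
    "--model-path", "meta-llama/Llama-4-Maverick-17B-128E-Instruct",
    "--speculative-algorithm", "EAGLE3",
    "--speculative-draft-model-path", "seanmamasde/llama4-maverick-17B-128E-eagle3-sglang",
    "--speculative-num-steps", "3",
    "--speculative-eagle-topk", "8",
    "--speculative-num-draft-tokens", "10",
    "--tp", "8",
    "--trust-remote-code",
])
```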
|
MitsuiChen14/DGTRS-CLIP-ViT-L-14
|
MitsuiChen14
| 2025-08-19T06:49:08Z
| 0
| 0
| null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-03-26T06:56:43Z
|
---
license: apache-2.0
---
|
resistz/sft_Qwen3-1.7B-Base_ultra200k
|
resistz
| 2025-08-19T06:48:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:45:13Z
|
---
library_name: transformers
model_name: sft_Qwen3-1.7B-Base_ultra200k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for sft_Qwen3-1.7B-Base_ultra200k
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="resistz/sft_Qwen3-1.7B-Base_ultra200k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/resistzzz97/Alignment_Influence/runs/tiuj57gw)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
hdong0/deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
|
hdong0
| 2025-08-19T06:48:29Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen2bm",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"custom_code",
"dataset:agentica-org/DeepScaleR-Preview-Dataset",
"arxiv:2402.03300",
"base_model:hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init",
"base_model:finetune:hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-08-18T23:10:15Z
|
---
base_model: hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init
datasets: agentica-org/DeepScaleR-Preview-Dataset
library_name: transformers
model_name: deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed
This model is a fine-tuned version of [hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init](https://huggingface.co/hdong0/deepseek-Qwen2.5-1.5B-baseline-thin-init) on the [agentica-org/DeepScaleR-Preview-Dataset](https://huggingface.co/datasets/agentica-org/DeepScaleR-Preview-Dataset) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="hdong0/deepseek-Qwen-1.5B-baseline-thin-Open-R1-GRPO_deepscaler_mu_8_constant_lr_warmed", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.0.dev0
- Transformers: 4.52.0.dev0
- Pytorch: 2.6.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755585896
|
lqpl
| 2025-08-19T06:48:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:45:50Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755584903
|
Sayemahsjn
| 2025-08-19T06:47:10Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:47:06Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Bearrr310/ds-train-grpo-7B-0818-dsvllm-bs2
|
Bearrr310
| 2025-08-19T06:46:59Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"generated_from_trainer",
"grpo",
"trl",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-7B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T03:01:16Z
|
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: transformers
model_name: ds_train_grpo_7B-0818-dsvllm-bs2
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for ds_train_grpo_7B-0818-dsvllm-bs2
This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/ds-train-grpo-7B-0818-dsvllm-bs2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
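The exact training script is not included in this card; for orientation, here is a minimal TRL GRPO sketch. The dataset and reward function below are illustrative assumptions taken from the TRL quickstart, not the ones used for this model:
```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Illustrative dataset and reward; the actual ones are not documented here.
dataset = load_dataset("trl-lib/tldr", split="train")

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-7B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-output"),
    train_dataset=dataset,
)
trainer.train()
```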
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755585947
|
0xaoyama
| 2025-08-19T06:46:23Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:46:10Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
donoway/ARC-Easy_Llama-3.2-1B-qba6fe5a
|
donoway
| 2025-08-19T06:45:46Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:33:14Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-qba6fe5a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-qba6fe5a
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1998
- Model Preparation Time: 0.006
- Mdl: 1808.9895
- Accumulated Loss: 1253.8960
- Correct Preds: 346.0
- Total Preds: 570.0
- Accuracy: 0.6070
- Correct Gen Preds: 337.0
- Gen Accuracy: 0.5912
- Correct Gen Preds 32: 123.0
- Correct Preds 32: 131.0
- Total Labels 32: 158.0
- Accuracy 32: 0.8291
- Gen Accuracy 32: 0.7785
- Correct Gen Preds 33: 106.0
- Correct Preds 33: 106.0
- Total Labels 33: 152.0
- Accuracy 33: 0.6974
- Gen Accuracy 33: 0.6974
- Correct Gen Preds 34: 74.0
- Correct Preds 34: 75.0
- Total Labels 34: 142.0
- Accuracy 34: 0.5282
- Gen Accuracy 34: 0.5211
- Correct Gen Preds 35: 34.0
- Correct Preds 35: 34.0
- Total Labels 35: 118.0
- Accuracy 35: 0.2881
- Gen Accuracy 35: 0.2881
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4642 | 1.0 | 1 | 1.5354 | 0.006 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4642 | 2.0 | 2 | 2.4299 | 0.006 | 1998.1608 | 1385.0195 | 210.0 | 570.0 | 0.3684 | 210.0 | 0.3684 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 144.0 | 144.0 | 152.0 | 0.9474 | 0.9474 | 66.0 | 66.0 | 142.0 | 0.4648 | 0.4648 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.7757 | 3.0 | 3 | 1.2974 | 0.006 | 1066.9296 | 739.5393 | 185.0 | 570.0 | 0.3246 | 185.0 | 0.3246 | 6.0 | 6.0 | 158.0 | 0.0380 | 0.0380 | 152.0 | 152.0 | 152.0 | 1.0 | 1.0 | 27.0 | 27.0 | 142.0 | 0.1901 | 0.1901 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6892 | 4.0 | 4 | 2.0158 | 0.006 | 1657.6402 | 1148.9886 | 279.0 | 570.0 | 0.4895 | 279.0 | 0.4895 | 148.0 | 148.0 | 158.0 | 0.9367 | 0.9367 | 48.0 | 48.0 | 152.0 | 0.3158 | 0.3158 | 57.0 | 57.0 | 142.0 | 0.4014 | 0.4014 | 26.0 | 26.0 | 118.0 | 0.2203 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1661 | 5.0 | 5 | 2.1998 | 0.006 | 1808.9895 | 1253.8960 | 346.0 | 570.0 | 0.6070 | 337.0 | 0.5912 | 123.0 | 131.0 | 158.0 | 0.8291 | 0.7785 | 106.0 | 106.0 | 152.0 | 0.6974 | 0.6974 | 74.0 | 75.0 | 142.0 | 0.5282 | 0.5211 | 34.0 | 34.0 | 118.0 | 0.2881 | 0.2881 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0079 | 6.0 | 6 | 2.8282 | 0.006 | 2325.6988 | 1612.0516 | 343.0 | 570.0 | 0.6018 | 296.0 | 0.5193 | 84.0 | 123.0 | 158.0 | 0.7785 | 0.5316 | 105.0 | 109.0 | 152.0 | 0.7171 | 0.6908 | 72.0 | 76.0 | 142.0 | 0.5352 | 0.5070 | 35.0 | 35.0 | 118.0 | 0.2966 | 0.2966 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0001 | 7.0 | 7 | 3.1565 | 0.006 | 2595.6829 | 1799.1903 | 339.0 | 570.0 | 0.5947 | 264.0 | 0.4632 | 60.0 | 117.0 | 158.0 | 0.7405 | 0.3797 | 104.0 | 111.0 | 152.0 | 0.7303 | 0.6842 | 69.0 | 76.0 | 142.0 | 0.5352 | 0.4859 | 31.0 | 35.0 | 118.0 | 0.2966 | 0.2627 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 3.3429 | 0.006 | 2749.0232 | 1905.4777 | 331.0 | 570.0 | 0.5807 | 236.0 | 0.4140 | 40.0 | 112.0 | 158.0 | 0.7089 | 0.2532 | 101.0 | 110.0 | 152.0 | 0.7237 | 0.6645 | 68.0 | 77.0 | 142.0 | 0.5423 | 0.4789 | 27.0 | 32.0 | 118.0 | 0.2712 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 3.5286 | 0.006 | 2901.6844 | 2011.2944 | 327.0 | 570.0 | 0.5737 | 228.0 | 0.4 | 41.0 | 110.0 | 158.0 | 0.6962 | 0.2595 | 99.0 | 111.0 | 152.0 | 0.7303 | 0.6513 | 61.0 | 74.0 | 142.0 | 0.5211 | 0.4296 | 27.0 | 32.0 | 118.0 | 0.2712 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 3.6900 | 0.006 | 3034.4363 | 2103.3110 | 323.0 | 570.0 | 0.5667 | 227.0 | 0.3982 | 41.0 | 111.0 | 158.0 | 0.7025 | 0.2595 | 97.0 | 107.0 | 152.0 | 0.7039 | 0.6382 | 62.0 | 73.0 | 142.0 | 0.5141 | 0.4366 | 27.0 | 32.0 | 118.0 | 0.2712 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.7945 | 0.006 | 3120.3216 | 2162.8421 | 323.0 | 570.0 | 0.5667 | 230.0 | 0.4035 | 43.0 | 112.0 | 158.0 | 0.7089 | 0.2722 | 98.0 | 107.0 | 152.0 | 0.7039 | 0.6447 | 63.0 | 73.0 | 142.0 | 0.5141 | 0.4437 | 26.0 | 31.0 | 118.0 | 0.2627 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.8860 | 0.006 | 3195.5829 | 2215.0093 | 321.0 | 570.0 | 0.5632 | 227.0 | 0.3982 | 46.0 | 111.0 | 158.0 | 0.7025 | 0.2911 | 94.0 | 105.0 | 152.0 | 0.6908 | 0.6184 | 62.0 | 74.0 | 142.0 | 0.5211 | 0.4366 | 25.0 | 31.0 | 118.0 | 0.2627 | 0.2119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.9627 | 0.006 | 3258.6448 | 2258.7204 | 321.0 | 570.0 | 0.5632 | 226.0 | 0.3965 | 45.0 | 110.0 | 158.0 | 0.6962 | 0.2848 | 94.0 | 106.0 | 152.0 | 0.6974 | 0.6184 | 62.0 | 74.0 | 142.0 | 0.5211 | 0.4366 | 25.0 | 31.0 | 118.0 | 0.2627 | 0.2119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 4.0387 | 0.006 | 3321.1484 | 2302.0447 | 319.0 | 570.0 | 0.5596 | 227.0 | 0.3982 | 48.0 | 109.0 | 158.0 | 0.6899 | 0.3038 | 93.0 | 105.0 | 152.0 | 0.6908 | 0.6118 | 61.0 | 74.0 | 142.0 | 0.5211 | 0.4296 | 25.0 | 31.0 | 118.0 | 0.2627 | 0.2119 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 4.0577 | 0.006 | 3336.7945 | 2312.8897 | 319.0 | 570.0 | 0.5596 | 226.0 | 0.3965 | 48.0 | 109.0 | 158.0 | 0.6899 | 0.3038 | 91.0 | 106.0 | 152.0 | 0.6974 | 0.5987 | 60.0 | 74.0 | 142.0 | 0.5211 | 0.4225 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.0975 | 0.006 | 3369.4997 | 2335.5592 | 317.0 | 570.0 | 0.5561 | 224.0 | 0.3930 | 50.0 | 109.0 | 158.0 | 0.6899 | 0.3165 | 88.0 | 104.0 | 152.0 | 0.6842 | 0.5789 | 60.0 | 73.0 | 142.0 | 0.5141 | 0.4225 | 26.0 | 31.0 | 118.0 | 0.2627 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.1230 | 0.006 | 3390.5230 | 2350.1314 | 316.0 | 570.0 | 0.5544 | 229.0 | 0.4018 | 51.0 | 108.0 | 158.0 | 0.6835 | 0.3228 | 91.0 | 104.0 | 152.0 | 0.6842 | 0.5987 | 60.0 | 74.0 | 142.0 | 0.5211 | 0.4225 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.1552 | 0.006 | 3416.9873 | 2368.4751 | 318.0 | 570.0 | 0.5579 | 229.0 | 0.4018 | 51.0 | 108.0 | 158.0 | 0.6835 | 0.3228 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 27.0 | 31.0 | 118.0 | 0.2627 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.1977 | 0.006 | 3451.8923 | 2392.6694 | 316.0 | 570.0 | 0.5544 | 227.0 | 0.3982 | 50.0 | 108.0 | 158.0 | 0.6835 | 0.3165 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 62.0 | 75.0 | 142.0 | 0.5282 | 0.4366 | 26.0 | 30.0 | 118.0 | 0.2542 | 0.2203 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.1922 | 0.006 | 3447.4190 | 2389.5688 | 317.0 | 570.0 | 0.5561 | 228.0 | 0.4 | 51.0 | 109.0 | 158.0 | 0.6899 | 0.3228 | 89.0 | 104.0 | 152.0 | 0.6842 | 0.5855 | 60.0 | 74.0 | 142.0 | 0.5211 | 0.4225 | 28.0 | 30.0 | 118.0 | 0.2542 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.2154 | 0.006 | 3466.4538 | 2402.7627 | 317.0 | 570.0 | 0.5561 | 231.0 | 0.4053 | 53.0 | 109.0 | 158.0 | 0.6899 | 0.3354 | 89.0 | 102.0 | 152.0 | 0.6711 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.2255 | 0.006 | 3474.8213 | 2408.5626 | 319.0 | 570.0 | 0.5596 | 231.0 | 0.4053 | 51.0 | 108.0 | 158.0 | 0.6835 | 0.3228 | 90.0 | 103.0 | 152.0 | 0.6776 | 0.5921 | 63.0 | 78.0 | 142.0 | 0.5493 | 0.4437 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.2222 | 0.006 | 3472.0563 | 2406.6461 | 323.0 | 570.0 | 0.5667 | 234.0 | 0.4105 | 53.0 | 111.0 | 158.0 | 0.7025 | 0.3354 | 89.0 | 104.0 | 152.0 | 0.6842 | 0.5855 | 64.0 | 77.0 | 142.0 | 0.5423 | 0.4507 | 28.0 | 31.0 | 118.0 | 0.2627 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.2449 | 0.006 | 3490.7282 | 2419.5884 | 318.0 | 570.0 | 0.5579 | 233.0 | 0.4088 | 53.0 | 108.0 | 158.0 | 0.6835 | 0.3354 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 63.0 | 76.0 | 142.0 | 0.5352 | 0.4437 | 28.0 | 31.0 | 118.0 | 0.2627 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.2439 | 0.006 | 3489.9021 | 2419.0158 | 317.0 | 570.0 | 0.5561 | 234.0 | 0.4105 | 53.0 | 107.0 | 158.0 | 0.6772 | 0.3354 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 64.0 | 76.0 | 142.0 | 0.5352 | 0.4507 | 28.0 | 31.0 | 118.0 | 0.2627 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.2465 | 0.006 | 3492.0437 | 2420.5002 | 316.0 | 570.0 | 0.5544 | 233.0 | 0.4088 | 55.0 | 109.0 | 158.0 | 0.6899 | 0.3481 | 89.0 | 101.0 | 152.0 | 0.6645 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.2626 | 0.006 | 3505.3292 | 2429.7091 | 317.0 | 570.0 | 0.5561 | 233.0 | 0.4088 | 54.0 | 109.0 | 158.0 | 0.6899 | 0.3418 | 88.0 | 102.0 | 152.0 | 0.6711 | 0.5789 | 62.0 | 75.0 | 142.0 | 0.5282 | 0.4366 | 29.0 | 31.0 | 118.0 | 0.2627 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.2468 | 0.006 | 3492.3048 | 2420.6812 | 320.0 | 570.0 | 0.5614 | 234.0 | 0.4105 | 53.0 | 108.0 | 158.0 | 0.6835 | 0.3354 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 63.0 | 78.0 | 142.0 | 0.5493 | 0.4437 | 29.0 | 31.0 | 118.0 | 0.2627 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.2713 | 0.006 | 3512.4807 | 2434.6661 | 318.0 | 570.0 | 0.5579 | 233.0 | 0.4088 | 54.0 | 109.0 | 158.0 | 0.6899 | 0.3418 | 89.0 | 102.0 | 152.0 | 0.6711 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 28.0 | 31.0 | 118.0 | 0.2627 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.2732 | 0.006 | 3513.9739 | 2435.7011 | 317.0 | 570.0 | 0.5561 | 234.0 | 0.4105 | 54.0 | 108.0 | 158.0 | 0.6835 | 0.3418 | 89.0 | 102.0 | 152.0 | 0.6711 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 29.0 | 31.0 | 118.0 | 0.2627 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.2507 | 0.006 | 3495.4848 | 2422.8854 | 319.0 | 570.0 | 0.5596 | 232.0 | 0.4070 | 53.0 | 109.0 | 158.0 | 0.6899 | 0.3354 | 89.0 | 102.0 | 152.0 | 0.6711 | 0.5855 | 62.0 | 77.0 | 142.0 | 0.5423 | 0.4366 | 28.0 | 31.0 | 118.0 | 0.2627 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.2647 | 0.006 | 3507.0566 | 2430.9064 | 321.0 | 570.0 | 0.5632 | 235.0 | 0.4123 | 54.0 | 109.0 | 158.0 | 0.6899 | 0.3418 | 89.0 | 104.0 | 152.0 | 0.6842 | 0.5855 | 64.0 | 78.0 | 142.0 | 0.5493 | 0.4507 | 28.0 | 30.0 | 118.0 | 0.2542 | 0.2373 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.2689 | 0.006 | 3510.5114 | 2433.3011 | 315.0 | 570.0 | 0.5526 | 230.0 | 0.4035 | 52.0 | 106.0 | 158.0 | 0.6709 | 0.3291 | 88.0 | 102.0 | 152.0 | 0.6711 | 0.5789 | 63.0 | 77.0 | 142.0 | 0.5423 | 0.4437 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 4.2978 | 0.006 | 3534.2027 | 2449.7226 | 318.0 | 570.0 | 0.5579 | 233.0 | 0.4088 | 55.0 | 109.0 | 158.0 | 0.6899 | 0.3481 | 89.0 | 103.0 | 152.0 | 0.6776 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 27.0 | 30.0 | 118.0 | 0.2542 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 4.2874 | 0.006 | 3525.6484 | 2443.7932 | 319.0 | 570.0 | 0.5596 | 233.0 | 0.4088 | 53.0 | 110.0 | 158.0 | 0.6962 | 0.3354 | 89.0 | 102.0 | 152.0 | 0.6711 | 0.5855 | 62.0 | 76.0 | 142.0 | 0.5352 | 0.4366 | 29.0 | 31.0 | 118.0 | 0.2627 | 0.2458 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
KCS97/poop_emoji
|
KCS97
| 2025-08-19T06:44:52Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-19T06:33:48Z
|
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks emoji
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - KCS97/poop_emoji
This is a dreambooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of sks emoji using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
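Until the snippet above is filled in, here is a minimal sketch using the standard `diffusers` loading path for a DreamBooth-finetuned Stable Diffusion checkpoint (fp16 and CUDA are assumptions):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "KCS97/poop_emoji", torch_dtype=torch.float16
).to("cuda")

# The weights were trained on the instance prompt "a photo of sks emoji".
image = pipe("a photo of sks emoji", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("sks_emoji.png")
```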
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
resistz/sft_Qwen3-0.6B-Base_ultra200k
|
resistz
| 2025-08-19T06:43:50Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:41:25Z
|
---
library_name: transformers
model_name: sft_Qwen3-0.6B-Base_ultra200k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for sft_Qwen3-0.6B-Base_ultra200k
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="resistz/sft_Qwen3-0.6B-Base_ultra200k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/resistzzz97/Alignment_Influence/runs/r3esvcx3)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
newmindai/Qwen2.5-72b-Instruct
|
newmindai
| 2025-08-19T06:42:24Z
| 30
| 0
|
vllm
|
[
"vllm",
"safetensors",
"qwen2",
"chat",
"text-generation",
"conversational",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"region:us"
] |
text-generation
| 2025-07-22T06:13:47Z
|
---
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen2.5-72B
tags:
- chat
library_name: vllm
---
# Qwen2.5-72B-Instruct (with CJK Filter)
This is a mirror of [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), deployed with a **custom server-side logits processor** that filters out CJK (Chinese, Japanese, Korean) characters during generation.
The deployment uses a **vLLM-powered, OpenAI-compatible API**, optimized for **Turkish and English** outputs by preventing undesired multilingual tokens.
---
## Features
- Language: Turkish, English, Multilingual
- Model: Qwen2.5-72B-Instruct (bfloat16)
- Max sequence length: 32,768 tokens
- Logits Processor: Filters CJK characters to prioritize Latin script
- Optimized for OpenAI-compatible deployment using vLLM
- Tensor Parallelism: 2
- License: qwen
---
## Server Deployment (Docker Compose with vLLM)
```yaml
services:
qwen-lm:
image: vllm/vllm-openai:v0.8.3
runtime: nvidia
environment:
- HUGGING_FACE_HUB_TOKEN=HF_TOKEN
- PYTHON_VERSION=3.12
- VLLM_DISABLE_COMPILE_CACHE=1
- HF_HOME=/mnt/model-cache
- VLLM_USE_V1=0
- PYTHONPATH=/app
volumes:
-
ports:
- "8010:8090"
shm_size: "220g"
command: >
--model newmindai/Qwen2.5-72b-Instruct
--tensor-parallel-size 2
--max-model-len 16384
--gpu-memory-utilization 0.95
--trust-remote-code
--host 0.0.0.0
--port 8090
--dtype bfloat16
--enable-chunked-prefill
--scheduling-policy priority
--served-model-name newmindai/Qwen2.5-72b-Instruct
--api-key <API_KEY>
--logits-processor-pattern <CJKFilter_Pattern>
deploy:
resources:
reservations:
devices:
- driver: nvidia
device_ids: ["0", "1"]
capabilities: [gpu]
```
---
## Logits Processor: `CJKCharacterFilterLogitsProcessor`
This custom logits processor prevents generation of any token containing CJK (Chinese, Japanese, Korean) characters. This helps maintain Turkish/English-focused outputs.
```python
import torch


class CJKCharacterFilterLogitsProcessor:
def __init__(self, tokenizer, device):
self.tokenizer = tokenizer
self.device = device
self.mask = None
def __call__(self, token_ids, logits):
if self.mask is None:
token_ids_range = torch.arange(logits.size(-1), device=self.device)
decoded_tokens = self.tokenizer.batch_decode(
token_ids_range.unsqueeze(1), skip_special_tokens=True
)
self.mask = torch.tensor([
any(
0x4E00 <= ord(c) <= 0x9FFF or # Chinese
0x3400 <= ord(c) <= 0x4DBF or # Chinese Extension A
0xF900 <= ord(c) <= 0xFAFF or # CJK Compatibility
0x3040 <= ord(c) <= 0x30FF or # Japanese Kana
0xAC00 <= ord(c) <= 0xD7AF # Korean Hangul
for c in token
) for token in decoded_tokens
], device=self.device)
logits_processed = logits.clone()
logits_processed[self.mask] = -float("inf")
return logits_processed
```
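As a quick local sanity check (a sketch, assuming `transformers` is installed; building the mask decodes the entire vocabulary once, so the first call takes a while):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("newmindai/Qwen2.5-72b-Instruct")
processor = CJKCharacterFilterLogitsProcessor(tokenizer, device="cpu")

vocab_size = len(tokenizer)
logits = torch.zeros(vocab_size)  # uniform scores over the whole vocabulary
filtered = processor(token_ids=[], logits=logits)

# Tokens containing CJK characters are driven to -inf and can never be sampled.
print(int(torch.isinf(filtered).sum()), "of", vocab_size, "tokens masked")
```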
---
## OpenAI-Compatible Client Usage
```python
from openai import OpenAI
client = OpenAI(
base_url="http://localhost:8010/v1",
api_key="YOUR_API_KEY"
)
response = client.chat.completions.create(
model="newmindai/Qwen2.5-72b-Instruct",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Türkiye'deki medeni hukuk sistemini açıkla."}
],
extra_body={
"logits_processors": [
"logits_processor_zoo.vllm.custom_logits_processor.cjk_filter"
]
},
temperature=0.6,
top_p=0.95,
frequency_penalty=1.005,
stream=True,
max_tokens=256
)
for chunk in response:
    if chunk.choices:
        # delta.content can be None on the final streamed chunk
        print(chunk.choices[0].delta.content or "", end="", flush=True)
```
---
## Example Comparison: With vs. Without CJK Filter
**Prompt**: `"Lütfen bana Çince bir paragraf yazın"`
### With `logits_processors`
```
Tabii ki, size Çince bir paragraf yazabilirim. İşte:
Beijing, Çin'in başkenti ve en büyük şehirlerinden biridir. Tarihi ve kültürel zenginlikleriyle ünlüdür. Şehrin merkezinde bulunan TIANANMEN Meydanı, dünyanın en büyük kentsel meydanlarından biridir. Ayrıca, Forbidden City (Zijin Cheng), 15. yüzyıldan beri Ming hanedanının ve Qing hanedanının sarayını barındırmıştır. Beijing aynı zamanda modern mimarinin örnekleriyle de dikkat çekmektedir, örneğin Ulusal Stadyum (Cuzhouniao veya Kuş Anıtı) 2008 Yaz Olimpiyatları için inşa edilmiştir.
```
### Without `logits_processors`
```
Elbette, size Çince bir paragraf yazabilirim. İşte:
中国的文化悠久而丰富多彩。从古代的四大发明到现代的科技发展,中国一直在不断地进步和创新。在艺术方面,中国画、书法和陶瓷艺术都是世界著名的。此外,中国的饮食文化也是其独特魅力的一部分,各地的特色菜肴让人回味无穷。无论是在历史、文化还是自然景观上,中国都有许多值得探索的地方.
```
Using the logits processor ensures that only Turkish and English text is generated, even under prompts requesting multilingual content.
---
## Evaluation
**Mezura Benchmarking**
Final performance was benchmarked using the [Mezura](https://huggingface.co/spaces/newmindai/Mezura) — a standardized evaluation suite developed by NewmindAI for structured Turkish NLP tasks.
## License
This model inherits the license of [Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct), which is licensed under **qwen**. You are free to use, adapt, and distribute the model under the terms specified in the license.
---
## Contact
For support, questions, or feature requests, please contact [newmindai on Hugging Face](https://huggingface.co/newmindai) or open an issue in the associated model repository.
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755585439
|
lqpl
| 2025-08-19T06:40:47Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:38:12Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
maximuspowers/starcoder2_7b_sft_output
|
maximuspowers
| 2025-08-19T06:38:09Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:bigcode/starcoder2-7b",
"base_model:finetune:bigcode/starcoder2-7b",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:37:29Z
|
---
base_model: bigcode/starcoder2-7b
library_name: transformers
model_name: starcoder2_7b_sft_output
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for starcoder2_7b_sft_output
This model is a fine-tuned version of [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="maximuspowers/starcoder2_7b_sft_output", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
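The exact training script is not included in this card; a minimal TRL SFT sketch along the same lines (the dataset below is an illustrative assumption, not the actual training data):
```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Illustrative dataset; the data actually used for this model is not documented here.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="bigcode/starcoder2-7b",
    train_dataset=dataset,
    args=SFTConfig(output_dir="starcoder2_7b_sft_output"),
)
trainer.train()
```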
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SF-Foundation/gpt-oss-citation
|
SF-Foundation
| 2025-08-19T06:37:51Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"endpoints_compatible",
"region:us"
] | null | 2025-08-08T00:35:38Z
|
---
library_name: transformers
model_name: gpt-oss-citation
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gpt-oss-citation
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="SF-Foundation/gpt-oss-citation", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.2.2
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VoilaRaj/78_l9bzGb
|
VoilaRaj
| 2025-08-19T06:35:45Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:31:52Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
truong1301/deepseek_task7_3
|
truong1301
| 2025-08-19T06:35:31Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"base_model:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:34:57Z
|
---
base_model: unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** truong1301
- **License:** apache-2.0
- **Finetuned from model :** unsloth/DeepSeek-R1-Distill-Qwen-14B-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ianmathu/Llama-3.2-3B-Instruct-unsloth-bnb-4bit-alpaca
|
ianmathu
| 2025-08-19T06:34:22Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:33:35Z
|
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755585151
|
lqpl
| 2025-08-19T06:34:07Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:33:25Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755585174
|
0xaoyama
| 2025-08-19T06:33:32Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:33:21Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
fhalation/zephyr-7b-sft-full
|
fhalation
| 2025-08-19T06:33:06Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"generated_from_trainer",
"trl",
"alignment-handbook",
"sft",
"conversational",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T12:49:18Z
|
---
base_model: mistralai/Mistral-7B-v0.1
library_name: transformers
model_name: zephyr-7b-sft-full
tags:
- generated_from_trainer
- trl
- alignment-handbook
- sft
licence: license
---
# Model Card for zephyr-7b-sft-full
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fhalation/zephyr-7b-sft-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.6.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
donoway/ARC-Easy_Llama-3.2-1B-yn0mux6w
|
donoway
| 2025-08-19T06:32:57Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:19:48Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-yn0mux6w
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-yn0mux6w
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7219
- Model Preparation Time: 0.0062
- Mdl: 2238.3307
- Accumulated Loss: 1551.4926
- Correct Preds: 386.0
- Total Preds: 570.0
- Accuracy: 0.6772
- Correct Gen Preds: 367.0
- Gen Accuracy: 0.6439
- Correct Gen Preds 32: 95.0
- Correct Preds 32: 106.0
- Total Labels 32: 158.0
- Accuracy 32: 0.6709
- Gen Accuracy 32: 0.6013
- Correct Gen Preds 33: 101.0
- Correct Preds 33: 103.0
- Total Labels 33: 152.0
- Accuracy 33: 0.6776
- Gen Accuracy 33: 0.6645
- Correct Gen Preds 34: 102.0
- Correct Preds 34: 106.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7465
- Gen Accuracy 34: 0.7183
- Correct Gen Preds 35: 69.0
- Correct Preds 35: 71.0
- Total Labels 35: 118.0
- Accuracy 35: 0.6017
- Gen Accuracy 35: 0.5847
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0062 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3774 | 1.0 | 1 | 1.5354 | 0.0062 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.3774 | 2.0 | 2 | 2.5968 | 0.0062 | 2135.4404 | 1480.1745 | 155.0 | 570.0 | 0.2719 | 155.0 | 0.2719 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 152.0 | 152.0 | 152.0 | 1.0 | 1.0 | 3.0 | 3.0 | 142.0 | 0.0211 | 0.0211 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9382 | 3.0 | 3 | 1.6496 | 0.0062 | 1356.5201 | 940.2681 | 222.0 | 570.0 | 0.3895 | 222.0 | 0.3895 | 109.0 | 109.0 | 158.0 | 0.6899 | 0.6899 | 112.0 | 112.0 | 152.0 | 0.7368 | 0.7368 | 1.0 | 1.0 | 142.0 | 0.0070 | 0.0070 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.6757 | 4.0 | 4 | 1.5680 | 0.0062 | 1289.4396 | 893.7715 | 264.0 | 570.0 | 0.4632 | 263.0 | 0.4614 | 146.0 | 147.0 | 158.0 | 0.9304 | 0.9241 | 20.0 | 20.0 | 152.0 | 0.1316 | 0.1316 | 70.0 | 70.0 | 142.0 | 0.4930 | 0.4930 | 27.0 | 27.0 | 118.0 | 0.2288 | 0.2288 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.2038 | 5.0 | 5 | 1.2887 | 0.0062 | 1059.7154 | 734.5388 | 383.0 | 570.0 | 0.6719 | 374.0 | 0.6561 | 104.0 | 108.0 | 158.0 | 0.6835 | 0.6582 | 93.0 | 96.0 | 152.0 | 0.6316 | 0.6118 | 108.0 | 108.0 | 142.0 | 0.7606 | 0.7606 | 69.0 | 71.0 | 118.0 | 0.6017 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0105 | 6.0 | 6 | 2.0876 | 0.0062 | 1716.6885 | 1189.9178 | 384.0 | 570.0 | 0.6737 | 369.0 | 0.6474 | 95.0 | 105.0 | 158.0 | 0.6646 | 0.6013 | 100.0 | 102.0 | 152.0 | 0.6711 | 0.6579 | 104.0 | 106.0 | 142.0 | 0.7465 | 0.7324 | 70.0 | 71.0 | 118.0 | 0.6017 | 0.5932 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 7.0 | 7 | 2.7219 | 0.0062 | 2238.3307 | 1551.4926 | 386.0 | 570.0 | 0.6772 | 367.0 | 0.6439 | 95.0 | 106.0 | 158.0 | 0.6709 | 0.6013 | 101.0 | 103.0 | 152.0 | 0.6776 | 0.6645 | 102.0 | 106.0 | 142.0 | 0.7465 | 0.7183 | 69.0 | 71.0 | 118.0 | 0.6017 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 3.1286 | 0.0062 | 2572.7573 | 1783.2995 | 386.0 | 570.0 | 0.6772 | 365.0 | 0.6404 | 94.0 | 104.0 | 158.0 | 0.6582 | 0.5949 | 104.0 | 107.0 | 152.0 | 0.7039 | 0.6842 | 99.0 | 104.0 | 142.0 | 0.7324 | 0.6972 | 68.0 | 71.0 | 118.0 | 0.6017 | 0.5763 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 3.4176 | 0.0062 | 2810.4409 | 1948.0492 | 383.0 | 570.0 | 0.6719 | 357.0 | 0.6263 | 86.0 | 101.0 | 158.0 | 0.6392 | 0.5443 | 104.0 | 107.0 | 152.0 | 0.7039 | 0.6842 | 98.0 | 103.0 | 142.0 | 0.7254 | 0.6901 | 69.0 | 72.0 | 118.0 | 0.6102 | 0.5847 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 3.5967 | 0.0062 | 2957.6771 | 2050.1055 | 386.0 | 570.0 | 0.6772 | 356.0 | 0.6246 | 82.0 | 101.0 | 158.0 | 0.6392 | 0.5190 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 100.0 | 105.0 | 142.0 | 0.7394 | 0.7042 | 67.0 | 71.0 | 118.0 | 0.6017 | 0.5678 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.7561 | 0.0062 | 3088.7869 | 2140.9839 | 380.0 | 570.0 | 0.6667 | 350.0 | 0.6140 | 79.0 | 98.0 | 158.0 | 0.6203 | 0.5 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 99.0 | 104.0 | 142.0 | 0.7324 | 0.6972 | 66.0 | 70.0 | 118.0 | 0.5932 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.8571 | 0.0062 | 3171.8705 | 2198.5731 | 379.0 | 570.0 | 0.6649 | 347.0 | 0.6088 | 76.0 | 97.0 | 158.0 | 0.6139 | 0.4810 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 66.0 | 69.0 | 118.0 | 0.5847 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.9345 | 0.0062 | 3235.4696 | 2242.6566 | 378.0 | 570.0 | 0.6632 | 348.0 | 0.6105 | 77.0 | 95.0 | 158.0 | 0.6013 | 0.4873 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 66.0 | 69.0 | 118.0 | 0.5847 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 3.9977 | 0.0062 | 3287.4322 | 2278.6744 | 378.0 | 570.0 | 0.6632 | 345.0 | 0.6053 | 75.0 | 96.0 | 158.0 | 0.6076 | 0.4747 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 4.0354 | 0.0062 | 3318.4791 | 2300.1944 | 379.0 | 570.0 | 0.6649 | 345.0 | 0.6053 | 76.0 | 96.0 | 158.0 | 0.6076 | 0.4810 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 97.0 | 105.0 | 142.0 | 0.7394 | 0.6831 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.0486 | 0.0062 | 3329.3097 | 2307.7017 | 375.0 | 570.0 | 0.6579 | 339.0 | 0.5947 | 72.0 | 96.0 | 158.0 | 0.6076 | 0.4557 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 95.0 | 101.0 | 142.0 | 0.7113 | 0.6690 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.1223 | 0.0062 | 3389.9024 | 2349.7013 | 376.0 | 570.0 | 0.6596 | 340.0 | 0.5965 | 72.0 | 96.0 | 158.0 | 0.6076 | 0.4557 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 96.0 | 102.0 | 142.0 | 0.7183 | 0.6761 | 66.0 | 69.0 | 118.0 | 0.5847 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.0992 | 0.0062 | 3370.9264 | 2336.5481 | 375.0 | 570.0 | 0.6579 | 338.0 | 0.5930 | 72.0 | 96.0 | 158.0 | 0.6076 | 0.4557 | 105.0 | 108.0 | 152.0 | 0.7105 | 0.6908 | 97.0 | 103.0 | 142.0 | 0.7254 | 0.6831 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.1257 | 0.0062 | 3392.7407 | 2351.6686 | 378.0 | 570.0 | 0.6632 | 340.0 | 0.5965 | 71.0 | 95.0 | 158.0 | 0.6013 | 0.4494 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 97.0 | 105.0 | 142.0 | 0.7394 | 0.6831 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.1234 | 0.0062 | 3390.8098 | 2350.3302 | 378.0 | 570.0 | 0.6632 | 339.0 | 0.5947 | 70.0 | 96.0 | 158.0 | 0.6076 | 0.4430 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 97.0 | 104.0 | 142.0 | 0.7324 | 0.6831 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.1404 | 0.0062 | 3404.8216 | 2360.0425 | 376.0 | 570.0 | 0.6596 | 338.0 | 0.5930 | 71.0 | 96.0 | 158.0 | 0.6076 | 0.4494 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 96.0 | 103.0 | 142.0 | 0.7254 | 0.6761 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.1645 | 0.0062 | 3424.6124 | 2373.7604 | 376.0 | 570.0 | 0.6596 | 339.0 | 0.5947 | 70.0 | 95.0 | 158.0 | 0.6013 | 0.4430 | 108.0 | 109.0 | 152.0 | 0.7171 | 0.7105 | 96.0 | 103.0 | 142.0 | 0.7254 | 0.6761 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.1592 | 0.0062 | 3420.2355 | 2370.7266 | 379.0 | 570.0 | 0.6649 | 341.0 | 0.5982 | 71.0 | 96.0 | 158.0 | 0.6076 | 0.4494 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 98.0 | 105.0 | 142.0 | 0.7394 | 0.6901 | 66.0 | 69.0 | 118.0 | 0.5847 | 0.5593 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.1565 | 0.0062 | 3418.0247 | 2369.1942 | 378.0 | 570.0 | 0.6632 | 340.0 | 0.5965 | 70.0 | 96.0 | 158.0 | 0.6076 | 0.4430 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.1931 | 0.0062 | 3448.1315 | 2390.0626 | 376.0 | 570.0 | 0.6596 | 341.0 | 0.5982 | 70.0 | 95.0 | 158.0 | 0.6013 | 0.4430 | 107.0 | 108.0 | 152.0 | 0.7105 | 0.7039 | 99.0 | 104.0 | 142.0 | 0.7324 | 0.6972 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.1936 | 0.0062 | 3448.5798 | 2390.3734 | 372.0 | 570.0 | 0.6526 | 336.0 | 0.5895 | 71.0 | 95.0 | 158.0 | 0.6013 | 0.4494 | 105.0 | 108.0 | 152.0 | 0.7105 | 0.6908 | 96.0 | 101.0 | 142.0 | 0.7113 | 0.6761 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.1744 | 0.0062 | 3432.7320 | 2379.3885 | 376.0 | 570.0 | 0.6596 | 338.0 | 0.5930 | 70.0 | 96.0 | 158.0 | 0.6076 | 0.4430 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 98.0 | 104.0 | 142.0 | 0.7324 | 0.6901 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.1920 | 0.0062 | 3447.2556 | 2389.4555 | 378.0 | 570.0 | 0.6632 | 341.0 | 0.5982 | 71.0 | 96.0 | 158.0 | 0.6076 | 0.4494 | 107.0 | 109.0 | 152.0 | 0.7171 | 0.7039 | 98.0 | 104.0 | 142.0 | 0.7324 | 0.6901 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.1905 | 0.0062 | 3446.0259 | 2388.6031 | 376.0 | 570.0 | 0.6596 | 339.0 | 0.5947 | 70.0 | 96.0 | 158.0 | 0.6076 | 0.4430 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 64.0 | 67.0 | 118.0 | 0.5678 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.1922 | 0.0062 | 3447.3786 | 2389.5408 | 377.0 | 570.0 | 0.6614 | 339.0 | 0.5947 | 70.0 | 95.0 | 158.0 | 0.6013 | 0.4430 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 99.0 | 104.0 | 142.0 | 0.7324 | 0.6972 | 64.0 | 69.0 | 118.0 | 0.5847 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.1992 | 0.0062 | 3453.1486 | 2393.5402 | 378.0 | 570.0 | 0.6632 | 339.0 | 0.5947 | 70.0 | 96.0 | 158.0 | 0.6076 | 0.4430 | 106.0 | 108.0 | 152.0 | 0.7105 | 0.6974 | 98.0 | 105.0 | 142.0 | 0.7394 | 0.6901 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.1634 | 0.0062 | 3423.6761 | 2373.1115 | 373.0 | 570.0 | 0.6544 | 338.0 | 0.5930 | 69.0 | 94.0 | 158.0 | 0.5949 | 0.4367 | 107.0 | 108.0 | 152.0 | 0.7105 | 0.7039 | 98.0 | 103.0 | 142.0 | 0.7254 | 0.6901 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.1932 | 0.0062 | 3448.1967 | 2390.1078 | 376.0 | 570.0 | 0.6596 | 338.0 | 0.5930 | 70.0 | 95.0 | 158.0 | 0.6013 | 0.4430 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 97.0 | 103.0 | 142.0 | 0.7254 | 0.6831 | 65.0 | 69.0 | 118.0 | 0.5847 | 0.5508 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 4.2044 | 0.0062 | 3457.3908 | 2396.4807 | 376.0 | 570.0 | 0.6596 | 340.0 | 0.5965 | 70.0 | 95.0 | 158.0 | 0.6013 | 0.4430 | 107.0 | 108.0 | 152.0 | 0.7105 | 0.7039 | 99.0 | 105.0 | 142.0 | 0.7394 | 0.6972 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 4.1960 | 0.0062 | 3450.5312 | 2391.7260 | 376.0 | 570.0 | 0.6596 | 340.0 | 0.5965 | 71.0 | 95.0 | 158.0 | 0.6013 | 0.4494 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 99.0 | 104.0 | 142.0 | 0.7324 | 0.6972 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 4.2146 | 0.0062 | 3465.8272 | 2402.3283 | 376.0 | 570.0 | 0.6596 | 338.0 | 0.5930 | 70.0 | 94.0 | 158.0 | 0.5949 | 0.4430 | 106.0 | 109.0 | 152.0 | 0.7171 | 0.6974 | 98.0 | 105.0 | 142.0 | 0.7394 | 0.6901 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 4.2056 | 0.0062 | 3458.4119 | 2397.1884 | 374.0 | 570.0 | 0.6561 | 334.0 | 0.5860 | 69.0 | 94.0 | 158.0 | 0.5949 | 0.4367 | 105.0 | 108.0 | 152.0 | 0.7105 | 0.6908 | 96.0 | 104.0 | 142.0 | 0.7324 | 0.6761 | 64.0 | 68.0 | 118.0 | 0.5763 | 0.5424 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
donoway/ARC-Challenge_Llama-3.2-1B-wgzurb4i
|
donoway
| 2025-08-19T06:32:56Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:22:57Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-wgzurb4i
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-wgzurb4i
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9193
- Model Preparation Time: 0.0058
- Mdl: 827.9179
- Accumulated Loss: 573.8690
- Correct Preds: 85.0
- Total Preds: 299.0
- Accuracy: 0.2843
- Correct Gen Preds: 1.0
- Gen Accuracy: 0.0033
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 23.0
- Total Labels 32: 64.0
- Accuracy 32: 0.3594
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 0.0
- Correct Preds 33: 48.0
- Total Labels 33: 73.0
- Accuracy 33: 0.6575
- Gen Accuracy 33: 0.0
- Correct Gen Preds 34: 0.0
- Correct Preds 34: 1.0
- Total Labels 34: 78.0
- Accuracy 34: 0.0128
- Gen Accuracy 34: 0.0
- Correct Gen Preds 35: 1.0
- Correct Preds 35: 13.0
- Total Labels 35: 83.0
- Accuracy 35: 0.1566
- Gen Accuracy 35: 0.0120
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.7999 | 1.0 | 1 | 1.6389 | 0.0058 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8225 | 2.0 | 2 | 2.6831 | 0.0058 | 1157.4179 | 802.2610 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.3461 | 3.0 | 3 | 1.9193 | 0.0058 | 827.9179 | 573.8690 | 85.0 | 299.0 | 0.2843 | 1.0 | 0.0033 | 0.0 | 23.0 | 64.0 | 0.3594 | 0.0 | 0.0 | 48.0 | 73.0 | 0.6575 | 0.0 | 0.0 | 1.0 | 78.0 | 0.0128 | 0.0 | 1.0 | 13.0 | 83.0 | 0.1566 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.9697 | 4.0 | 4 | 1.8682 | 0.0058 | 805.8738 | 558.5892 | 78.0 | 299.0 | 0.2609 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 53.0 | 61.0 | 73.0 | 0.8356 | 0.7260 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 15.0 | 17.0 | 83.0 | 0.2048 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.6039 | 5.0 | 5 | 2.2833 | 0.0058 | 984.9399 | 682.7083 | 74.0 | 299.0 | 0.2475 | 54.0 | 0.1806 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 53.0 | 73.0 | 73.0 | 1.0 | 0.7260 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.1602 | 6.0 | 6 | 2.6340 | 0.0058 | 1136.2166 | 787.5654 | 75.0 | 299.0 | 0.2508 | 27.0 | 0.0903 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 25.0 | 71.0 | 73.0 | 0.9726 | 0.3425 | 1.0 | 3.0 | 78.0 | 0.0385 | 0.0128 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0187 | 7.0 | 7 | 2.9628 | 0.0058 | 1278.0297 | 885.8627 | 72.0 | 299.0 | 0.2408 | 17.0 | 0.0569 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 13.0 | 64.0 | 73.0 | 0.8767 | 0.1781 | 3.0 | 4.0 | 78.0 | 0.0513 | 0.0385 | 1.0 | 3.0 | 83.0 | 0.0361 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0013 | 8.0 | 8 | 3.2575 | 0.0058 | 1405.1697 | 973.9894 | 72.0 | 299.0 | 0.2408 | 13.0 | 0.0435 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 9.0 | 62.0 | 73.0 | 0.8493 | 0.1233 | 3.0 | 4.0 | 78.0 | 0.0513 | 0.0385 | 1.0 | 5.0 | 83.0 | 0.0602 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0002 | 9.0 | 9 | 3.4603 | 0.0058 | 1492.6757 | 1034.6439 | 72.0 | 299.0 | 0.2408 | 12.0 | 0.0401 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 61.0 | 73.0 | 0.8356 | 0.0959 | 3.0 | 4.0 | 78.0 | 0.0513 | 0.0385 | 2.0 | 6.0 | 83.0 | 0.0723 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0001 | 10.0 | 10 | 3.6246 | 0.0058 | 1563.5373 | 1083.7615 | 73.0 | 299.0 | 0.2441 | 13.0 | 0.0435 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 8.0 | 60.0 | 73.0 | 0.8219 | 0.1096 | 3.0 | 5.0 | 78.0 | 0.0641 | 0.0385 | 2.0 | 7.0 | 83.0 | 0.0843 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.7564 | 0.0058 | 1620.3818 | 1123.1631 | 74.0 | 299.0 | 0.2475 | 12.0 | 0.0401 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 8.0 | 58.0 | 73.0 | 0.7945 | 0.1096 | 3.0 | 7.0 | 78.0 | 0.0897 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.8610 | 0.0058 | 1665.5082 | 1154.4423 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 56.0 | 73.0 | 0.7671 | 0.0959 | 3.0 | 7.0 | 78.0 | 0.0897 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.9492 | 0.0058 | 1703.5624 | 1180.8195 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 56.0 | 73.0 | 0.7671 | 0.0959 | 3.0 | 7.0 | 78.0 | 0.0897 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 4.0205 | 0.0058 | 1734.2980 | 1202.1238 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 56.0 | 73.0 | 0.7671 | 0.0959 | 3.0 | 7.0 | 78.0 | 0.0897 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 4.0650 | 0.0058 | 1753.4997 | 1215.4334 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 56.0 | 73.0 | 0.7671 | 0.0959 | 3.0 | 7.0 | 78.0 | 0.0897 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.1048 | 0.0058 | 1770.6704 | 1227.3352 | 74.0 | 299.0 | 0.2475 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 56.0 | 73.0 | 0.7671 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.1270 | 0.0058 | 1780.2283 | 1233.9602 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 54.0 | 73.0 | 0.7397 | 0.0959 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.1560 | 0.0058 | 1792.7773 | 1242.6585 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 54.0 | 73.0 | 0.7397 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.1837 | 0.0058 | 1804.7127 | 1250.9315 | 70.0 | 299.0 | 0.2341 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 8.0 | 78.0 | 0.1026 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.2006 | 0.0058 | 1811.9983 | 1255.9815 | 72.0 | 299.0 | 0.2408 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 54.0 | 73.0 | 0.7397 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.2145 | 0.0058 | 1818.0010 | 1260.1423 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.2290 | 0.0058 | 1824.2639 | 1264.4834 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.2366 | 0.0058 | 1827.5084 | 1266.7323 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.2348 | 0.0058 | 1826.7551 | 1266.2101 | 70.0 | 299.0 | 0.2341 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.2429 | 0.0058 | 1830.2455 | 1268.6295 | 70.0 | 299.0 | 0.2341 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.2432 | 0.0058 | 1830.3748 | 1268.7191 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.2533 | 0.0058 | 1834.7450 | 1271.7483 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.2639 | 0.0058 | 1839.2829 | 1274.8938 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 53.0 | 73.0 | 0.7260 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.2638 | 0.0058 | 1839.2620 | 1274.8792 | 70.0 | 299.0 | 0.2341 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.2640 | 0.0058 | 1839.3466 | 1274.9379 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.2660 | 0.0058 | 1840.1879 | 1275.5211 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.2655 | 0.0058 | 1839.9913 | 1275.3848 | 70.0 | 299.0 | 0.2341 | 11.0 | 0.0368 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.2671 | 0.0058 | 1840.6805 | 1275.8625 | 71.0 | 299.0 | 0.2375 | 11.0 | 0.0368 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 7.0 | 52.0 | 73.0 | 0.7123 | 0.0959 | 3.0 | 9.0 | 78.0 | 0.1154 | 0.0385 | 1.0 | 8.0 | 83.0 | 0.0964 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755583495
|
hakimjustbao
| 2025-08-19T06:32:50Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:32:46Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
oegbo/gemma3-radiography-model
|
oegbo
| 2025-08-19T06:31:45Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-pt-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:31:34Z
|
---
base_model: unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** oegbo
- **License:** apache-2.0
- **Finetuned from model:** unsloth/gemma-3-4b-pt-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
donoway/BoolQ_Llama-3.2-1B-5r42yp3k
|
donoway
| 2025-08-19T06:31:35Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:21:38Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: BoolQ_Llama-3.2-1B-5r42yp3k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BoolQ_Llama-3.2-1B-5r42yp3k
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5466
- Model Preparation Time: 0.0056
- Mdl: 7296.3329
- Accumulated Loss: 5057.4326
- Correct Preds: 2619.0
- Total Preds: 3270.0
- Accuracy: 0.8009
- Correct Gen Preds: 2594.0
- Gen Accuracy: 0.7933
- Correct Gen Preds 9642: 1748.0
- Correct Preds 9642: 1776.0
- Total Labels 9642: 2026.0
- Accuracy 9642: 0.8766
- Gen Accuracy 9642: 0.8628
- Correct Gen Preds 2822: 838.0
- Correct Preds 2822: 843.0
- Total Labels 2822: 1231.0
- Accuracy 2822: 0.6848
- Gen Accuracy 2822: 0.6807
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 120
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 9642 | Correct Preds 9642 | Total Labels 9642 | Accuracy 9642 | Gen Accuracy 9642 | Correct Gen Preds 2822 | Correct Preds 2822 | Total Labels 2822 | Accuracy 2822 | Gen Accuracy 2822 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|:----------------------:|:------------------:|:-----------------:|:-------------:|:-----------------:|
| No log | 0 | 0 | 0.7080 | 0.0056 | 3339.8933 | 2315.0376 | 2032.0 | 3270.0 | 0.6214 | 2040.0 | 0.6239 | 2007.0 | 2008.0 | 2026.0 | 0.9911 | 0.9906 | 24.0 | 24.0 | 1231.0 | 0.0195 | 0.0195 |
| 0.4335 | 1.0 | 43 | 0.5330 | 0.0056 | 2514.6645 | 1743.0326 | 2457.0 | 3270.0 | 0.7514 | 2447.0 | 0.7483 | 1619.0 | 1630.0 | 2026.0 | 0.8045 | 0.7991 | 819.0 | 827.0 | 1231.0 | 0.6718 | 0.6653 |
| 0.2605 | 2.0 | 86 | 0.6563 | 0.0056 | 3096.0653 | 2146.0289 | 2450.0 | 3270.0 | 0.7492 | 1969.0 | 0.6021 | 1023.0 | 1427.0 | 2026.0 | 0.7043 | 0.5049 | 939.0 | 1023.0 | 1231.0 | 0.8310 | 0.7628 |
| 0.0158 | 3.0 | 129 | 1.0674 | 0.0056 | 5035.6484 | 3490.4455 | 2536.0 | 3270.0 | 0.7755 | 2378.0 | 0.7272 | 1717.0 | 1872.0 | 2026.0 | 0.9240 | 0.8475 | 654.0 | 664.0 | 1231.0 | 0.5394 | 0.5313 |
| 0.1505 | 4.0 | 172 | 1.4954 | 0.0056 | 7054.8825 | 4890.0719 | 2587.0 | 3270.0 | 0.7911 | 2572.0 | 0.7865 | 1811.0 | 1831.0 | 2026.0 | 0.9038 | 0.8939 | 752.0 | 756.0 | 1231.0 | 0.6141 | 0.6109 |
| 0.0 | 5.0 | 215 | 1.4715 | 0.0056 | 6942.0371 | 4811.8535 | 2611.0 | 3270.0 | 0.7985 | 2575.0 | 0.7875 | 1690.0 | 1727.0 | 2026.0 | 0.8524 | 0.8342 | 877.0 | 884.0 | 1231.0 | 0.7181 | 0.7124 |
| 0.0004 | 6.0 | 258 | 1.5466 | 0.0056 | 7296.3329 | 5057.4326 | 2619.0 | 3270.0 | 0.8009 | 2594.0 | 0.7933 | 1748.0 | 1776.0 | 2026.0 | 0.8766 | 0.8628 | 838.0 | 843.0 | 1231.0 | 0.6848 | 0.6807 |
| 0.0 | 7.0 | 301 | 1.5498 | 0.0056 | 7311.3028 | 5067.8089 | 2617.0 | 3270.0 | 0.8003 | 2587.0 | 0.7911 | 1708.0 | 1739.0 | 2026.0 | 0.8583 | 0.8430 | 871.0 | 878.0 | 1231.0 | 0.7132 | 0.7076 |
| 0.0 | 8.0 | 344 | 1.5583 | 0.0056 | 7351.5687 | 5095.7191 | 2617.0 | 3270.0 | 0.8003 | 2591.0 | 0.7924 | 1708.0 | 1737.0 | 2026.0 | 0.8574 | 0.8430 | 875.0 | 880.0 | 1231.0 | 0.7149 | 0.7108 |
| 0.0 | 9.0 | 387 | 1.5645 | 0.0056 | 7380.4891 | 5115.7652 | 2615.0 | 3270.0 | 0.7997 | 2589.0 | 0.7917 | 1710.0 | 1738.0 | 2026.0 | 0.8578 | 0.8440 | 871.0 | 877.0 | 1231.0 | 0.7124 | 0.7076 |
| 0.0 | 10.0 | 430 | 1.5689 | 0.0056 | 7401.5336 | 5130.3521 | 2615.0 | 3270.0 | 0.7997 | 2593.0 | 0.7930 | 1712.0 | 1738.0 | 2026.0 | 0.8578 | 0.8450 | 873.0 | 877.0 | 1231.0 | 0.7124 | 0.7092 |
| 0.0 | 11.0 | 473 | 1.5753 | 0.0056 | 7431.6332 | 5151.2156 | 2618.0 | 3270.0 | 0.8006 | 2595.0 | 0.7936 | 1713.0 | 1738.0 | 2026.0 | 0.8578 | 0.8455 | 873.0 | 880.0 | 1231.0 | 0.7149 | 0.7092 |
| 0.0 | 12.0 | 516 | 1.5764 | 0.0056 | 7436.8304 | 5154.8180 | 2617.0 | 3270.0 | 0.8003 | 2594.0 | 0.7933 | 1714.0 | 1739.0 | 2026.0 | 0.8583 | 0.8460 | 872.0 | 878.0 | 1231.0 | 0.7132 | 0.7084 |
| 0.0 | 13.0 | 559 | 1.5821 | 0.0056 | 7463.8777 | 5173.5658 | 2616.0 | 3270.0 | 0.8 | 2592.0 | 0.7927 | 1712.0 | 1738.0 | 2026.0 | 0.8578 | 0.8450 | 872.0 | 878.0 | 1231.0 | 0.7132 | 0.7084 |
| 0.0 | 14.0 | 602 | 1.5848 | 0.0056 | 7476.3623 | 5182.2194 | 2615.0 | 3270.0 | 0.7997 | 2592.0 | 0.7927 | 1711.0 | 1737.0 | 2026.0 | 0.8574 | 0.8445 | 873.0 | 878.0 | 1231.0 | 0.7132 | 0.7092 |
| 0.0 | 15.0 | 645 | 1.5866 | 0.0056 | 7484.9367 | 5188.1628 | 2617.0 | 3270.0 | 0.8003 | 2595.0 | 0.7936 | 1712.0 | 1738.0 | 2026.0 | 0.8578 | 0.8450 | 874.0 | 879.0 | 1231.0 | 0.7141 | 0.7100 |
| 0.9802 | 16.0 | 688 | 1.5898 | 0.0056 | 7499.9718 | 5198.5843 | 2617.0 | 3270.0 | 0.8003 | 2597.0 | 0.7942 | 1714.0 | 1738.0 | 2026.0 | 0.8578 | 0.8460 | 875.0 | 879.0 | 1231.0 | 0.7141 | 0.7108 |
| 0.0 | 17.0 | 731 | 1.5963 | 0.0056 | 7530.6554 | 5219.8526 | 2616.0 | 3270.0 | 0.8 | 2597.0 | 0.7942 | 1715.0 | 1739.0 | 2026.0 | 0.8583 | 0.8465 | 874.0 | 877.0 | 1231.0 | 0.7124 | 0.7100 |
| 0.0 | 18.0 | 774 | 1.6015 | 0.0056 | 7555.0401 | 5236.7547 | 2613.0 | 3270.0 | 0.7991 | 2592.0 | 0.7927 | 1712.0 | 1737.0 | 2026.0 | 0.8574 | 0.8450 | 872.0 | 876.0 | 1231.0 | 0.7116 | 0.7084 |
| 0.0 | 19.0 | 817 | 1.5991 | 0.0056 | 7543.8108 | 5228.9712 | 2618.0 | 3270.0 | 0.8006 | 2597.0 | 0.7942 | 1713.0 | 1738.0 | 2026.0 | 0.8578 | 0.8455 | 876.0 | 880.0 | 1231.0 | 0.7149 | 0.7116 |
| 0.0 | 20.0 | 860 | 1.6021 | 0.0056 | 7558.1173 | 5238.8877 | 2616.0 | 3270.0 | 0.8 | 2596.0 | 0.7939 | 1715.0 | 1739.0 | 2026.0 | 0.8583 | 0.8465 | 873.0 | 877.0 | 1231.0 | 0.7124 | 0.7092 |
| 0.0 | 21.0 | 903 | 1.6036 | 0.0056 | 7565.0561 | 5243.6973 | 2614.0 | 3270.0 | 0.7994 | 2594.0 | 0.7933 | 1713.0 | 1737.0 | 2026.0 | 0.8574 | 0.8455 | 873.0 | 877.0 | 1231.0 | 0.7124 | 0.7092 |
| 0.0 | 22.0 | 946 | 1.6052 | 0.0056 | 7572.8549 | 5249.1031 | 2615.0 | 3270.0 | 0.7997 | 2596.0 | 0.7939 | 1713.0 | 1737.0 | 2026.0 | 0.8574 | 0.8455 | 874.0 | 878.0 | 1231.0 | 0.7132 | 0.7100 |
| 0.0 | 23.0 | 989 | 1.6049 | 0.0056 | 7571.4610 | 5248.1369 | 2614.0 | 3270.0 | 0.7994 | 2595.0 | 0.7936 | 1712.0 | 1736.0 | 2026.0 | 0.8569 | 0.8450 | 875.0 | 878.0 | 1231.0 | 0.7132 | 0.7108 |
| 0.0 | 24.0 | 1032 | 1.6037 | 0.0056 | 7565.6381 | 5244.1007 | 2616.0 | 3270.0 | 0.8 | 2597.0 | 0.7942 | 1716.0 | 1739.0 | 2026.0 | 0.8583 | 0.8470 | 873.0 | 877.0 | 1231.0 | 0.7124 | 0.7092 |
| 0.0 | 25.0 | 1075 | 1.6096 | 0.0056 | 7593.4658 | 5263.3894 | 2615.0 | 3270.0 | 0.7997 | 2595.0 | 0.7936 | 1714.0 | 1738.0 | 2026.0 | 0.8578 | 0.8460 | 873.0 | 877.0 | 1231.0 | 0.7124 | 0.7092 |
| 0.0 | 26.0 | 1118 | 1.6081 | 0.0056 | 7586.3418 | 5258.4514 | 2618.0 | 3270.0 | 0.8006 | 2600.0 | 0.7951 | 1717.0 | 1739.0 | 2026.0 | 0.8583 | 0.8475 | 875.0 | 879.0 | 1231.0 | 0.7141 | 0.7108 |
| 0.0 | 27.0 | 1161 | 1.6060 | 0.0056 | 7576.7036 | 5251.7707 | 2615.0 | 3270.0 | 0.7997 | 2594.0 | 0.7933 | 1712.0 | 1737.0 | 2026.0 | 0.8574 | 0.8450 | 874.0 | 878.0 | 1231.0 | 0.7132 | 0.7100 |
| 0.0 | 28.0 | 1204 | 1.6088 | 0.0056 | 7589.7099 | 5260.7860 | 2617.0 | 3270.0 | 0.8003 | 2598.0 | 0.7945 | 1717.0 | 1739.0 | 2026.0 | 0.8583 | 0.8475 | 873.0 | 878.0 | 1231.0 | 0.7132 | 0.7092 |
| 0.0 | 29.0 | 1247 | 1.6068 | 0.0056 | 7580.2581 | 5254.2345 | 2613.0 | 3270.0 | 0.7991 | 2595.0 | 0.7936 | 1717.0 | 1740.0 | 2026.0 | 0.8588 | 0.8475 | 869.0 | 873.0 | 1231.0 | 0.7092 | 0.7059 |
| 0.0 | 30.0 | 1290 | 1.6088 | 0.0056 | 7589.7604 | 5260.8210 | 2616.0 | 3270.0 | 0.8 | 2599.0 | 0.7948 | 1716.0 | 1738.0 | 2026.0 | 0.8578 | 0.8470 | 875.0 | 878.0 | 1231.0 | 0.7132 | 0.7108 |
| 0.0 | 31.0 | 1333 | 1.6060 | 0.0056 | 7576.4338 | 5251.5837 | 2611.0 | 3270.0 | 0.7985 | 2592.0 | 0.7927 | 1713.0 | 1736.0 | 2026.0 | 0.8569 | 0.8455 | 871.0 | 875.0 | 1231.0 | 0.7108 | 0.7076 |
| 0.0 | 32.0 | 1376 | 1.6103 | 0.0056 | 7596.7626 | 5265.6746 | 2618.0 | 3270.0 | 0.8006 | 2599.0 | 0.7948 | 1716.0 | 1740.0 | 2026.0 | 0.8588 | 0.8470 | 875.0 | 878.0 | 1231.0 | 0.7132 | 0.7108 |
| 0.0 | 33.0 | 1419 | 1.6099 | 0.0056 | 7594.6633 | 5264.2194 | 2612.0 | 3270.0 | 0.7988 | 2594.0 | 0.7933 | 1715.0 | 1737.0 | 2026.0 | 0.8574 | 0.8465 | 871.0 | 875.0 | 1231.0 | 0.7108 | 0.7076 |
| 0.0 | 34.0 | 1462 | 1.6107 | 0.0056 | 7598.6742 | 5266.9996 | 2616.0 | 3270.0 | 0.8 | 2597.0 | 0.7942 | 1716.0 | 1738.0 | 2026.0 | 0.8578 | 0.8470 | 873.0 | 878.0 | 1231.0 | 0.7132 | 0.7092 |
| 0.0 | 35.0 | 1505 | 1.6082 | 0.0056 | 7586.7298 | 5258.7204 | 2617.0 | 3270.0 | 0.8003 | 2601.0 | 0.7954 | 1718.0 | 1738.0 | 2026.0 | 0.8578 | 0.8480 | 874.0 | 879.0 | 1231.0 | 0.7141 | 0.7100 |
| 0.0 | 36.0 | 1548 | 1.6120 | 0.0056 | 7604.7402 | 5271.2042 | 2617.0 | 3270.0 | 0.8003 | 2601.0 | 0.7954 | 1718.0 | 1738.0 | 2026.0 | 0.8578 | 0.8480 | 875.0 | 879.0 | 1231.0 | 0.7141 | 0.7108 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
kjn96/andrea
|
kjn96
| 2025-08-19T06:30:48Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T06:06:49Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: Andrea
---
# Andrea
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `Andrea` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate

input = {
    "prompt": "Andrea",
    "lora_weights": "https://huggingface.co/kjn96/andrea/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('kjn96/andrea', weight_name='lora.safetensors')
image = pipeline('Andrea').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 1000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/kjn96/andrea/discussions) to add images that show off what you’ve made with this LoRA.
|
kunpengshi001/dummy-model
|
kunpengshi001
| 2025-08-19T06:28:50Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2025-08-19T06:26:39Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
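In the absence of an official snippet, a minimal sketch, assuming this CamemBERT checkpoint works with the standard fill-mask pipeline (the example sentence is illustrative only):
```python
from transformers import pipeline

# CamemBERT uses <mask> as its mask token.
unmasker = pipeline("fill-mask", model="kunpengshi001/dummy-model")
print(unmasker("Le camembert est <mask> !"))
```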
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
harikrushna2272/SmolGRPO-135M
|
harikrushna2272
| 2025-08-19T06:28:48Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"GRPO",
"Reasoning-Course",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:28:14Z
|
---
library_name: transformers
tags:
- GRPO
- Reasoning-Course
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
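In the absence of an official snippet, a minimal sketch, assuming a standard causal-LM checkpoint with a chat template (the prompt is illustrative only):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="harikrushna2272/SmolGRPO-135M")
messages = [{"role": "user", "content": "What is 7 * 8? Think step by step."}]
print(generator(messages, max_new_tokens=128)[0]["generated_text"])
```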
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shiimi/labse-dhivehi-finetuned
|
shiimi
| 2025-08-19T06:28:01Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:968266",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/LaBSE",
"base_model:finetune:sentence-transformers/LaBSE",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T05:46:17Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:968266
- loss:CosineSimilarityLoss
base_model: sentence-transformers/LaBSE
widget:
- source_sentence: ކުއްލިއަކަށް ދޮންބެ ތެދުވެ އިނދެ ދެފައި ވައްކޮއްލިއެވެ. ދެލޯ ބޮޑުކޮއްގެން
ހުރެ ހެވެމުން ދިލެމުން ގޮސް އަހަރެން ހުޅުވާލީވެސް ދޮންބެ ބުނި ކަބަޑެވެ. ގެރިގުއި
ކުލައިގެ ކަރުދާހަކުން ބަންދުކޮއްފައި އޮތް ފޮށިގަނޑެއް ފެނުމާއި އެކު އަހަރެންނަށް
ބަލާލެވުނީ ގޮދަނޑިމަތީގައި ދެފައި ވަށްކޮއްގެން އިން ބޭބެ އާއި ދިމާއަށެވެ.
sentences:
- sheet covering coffin
- The king's kidneys, heart and lungs have also stopped working, Saudi health officials
said, according to Press TV.
- The Civil Court of Maldives has ordered the seizure of passports and freezing
bank accounts belonging to Haulath Faheem, wife of former President Dr. Mohamed
Jamil, as well as seven other members of his family in connection with a case
of proven debt. This was decided by the court today after an action filed by Mohammad
Aniis who served as General Manager at four resorts owned by Three A Company when
it was not being divided into shares. The heir was not present at the court. The
lawyer for the heirs said that he has appealed to the High Court against this
decision. In any case of proven debt, it is a common practice in courts to hold
passports and freeze accounts as part of an application for enforcement of judgment
when there are no payments made by debtors. The family appealed the Civil Court’s
order to pay them back, which was then reviewed by the Supreme Court. In addition
to the three charges, Anies also brought another two cases against Musa Fahim’s
heirs. The other accused are Haulat and Shaheed as well as Farida Ibrahim, Ahmad
Shahid Shiyam, Ali Shiyam, Hassan Shiyam, Maryam Shifa and Aimanat Ashfah. The
two brothers’ son Anies said he owes the company 1.8 million rupees for days when
senior management was not paid due to problems arising from the split of Three
Airline Company Ltd (THAC). The order was issued in response to a case filed by
Anis at the Civil Court on May 15, requesting payment of Rs.731,540.80 due from
his family following an appeal ruling made on February 17 this year. He said that
no appeal had been lodged against the judgment for over ninety days and he is
still waiting for the decision to be announced.
- source_sentence: 24 ޖުލައި 2013 ގައި ޖޯން ހޮޖްމަން މެކްސިމަމް ފަން ޕޮޑްކާސްޓް ``
ޖަޖް ބްރަދަރ އަލީ '' އިން ފެނިގެންދިޔައީ '' އެކްސްޕާޓް ވިޓްނަސް '' ގެ ގޮތުގައެވެ
.
sentences:
- Translate the following sentence into a different language and add a proof of
the translation in the footnotes. Traer tu propia bolsa es una elección ecológica.
<sup>1</sup> --- <sup>1</sup> Translation from English to Spanish using Google
Translate.
- The result sheet of the Ihwandu constituency, which is part of the North East
District Council was lost and it has been found while reopening a ballot box.
It had to be counted again after that because the results were missing. In presence
of representatives from candidates who contested for this district as well as
media, the election commission opened the ballot box at 8:30 p.m. today when they
discovered the result sheet in another letter. The results sheet was mistakenly
placed in a wrong envelope.The Election Commission decided that the ballot box
did not need to be counted after seeing its result sheet.This is the first election
with an issue of this kind. The Complaints Bureau has not received any complaints
from the voters that would require a ballot box to be reopened, said Election
Commission Director General Mohamed Sheik. The Commission said that 60 percent
of the total number of results sheets, which is estimated to be around 17,000
have been cleared.
- Outline the following passage I. American astronauts' exploration of the moon
A. Began in 1969 B. Building of moon bases C. Driving lunar rovers on the surface
D. Collection of moon samples.
- source_sentence: އަދި ލަންގޭންސްޓައިންބާކް އާއި އަލަށް އުފެއްދި ޝިސްޝުޓެނަކަރ ރޭލްވޭ
ސްޓޭޝަނާ ދެމެދު 2011 ވަނަ އަހަރު ކުރު ޑަބަލް ޓްރެކެއް ވެސް ހެދިއެވެ .
sentences:
- i told them i would personally be delighted if sia would fly to and from europe
via the maldives.
- A short double track was also built between Langensteinbach and the newly created
Schießhüttenäcker railway station in 2011 .
- Offer one suggestion to reduce cases of teenage suicide. One suggestion to reduce
cases of teenage suicide could be to provide accessible and safe mental health
support for teenagers. This could be in the form of school counselors, teen helplines,
or mental health workshops, among other resources. By ensuring that teenagers
have someone to talk to about their struggles and concerns, it can alleviate feelings
of hopelessness and isolation, which are major risk factors for suicide.
- source_sentence: އަޖީއެމްއެޗްގެ އަހަރި ދުވަހާއި ގުޅުވައިގެން ބާއްވާ މި ފެއާއަށް
ދާ ފަރާތްތަކަށް ހިލޭ ގުލްކޯޒް، ހަކުރު، އަދި ލޭގެ ޕްރެޝަރު ހުރި މިންވަރު ބަލައިދެމުންދާ
ކަމަށް އައިޖީއެމްއެޗުން ބުނެއެވެ.
sentences:
- A young man died in a serious accident on the road at night. The victim was identified
as Hussain Adham, 21 years old from Hithadhoo. The 54-year old man died at the
hospital after being treated for a heart attack. According to witnesses, the accident
occurred when Adham was driving from Hittadu towards Maradu and collided with
another motorbike that had been travelling along Link Road in direction of Maradu.
The accident resulted in a severe fracture of his head and extensive bleeding.
He was also broken his neck and a hand. "The helmet he was wearing broke and his
head got injured. The injuries were severe," the witness said. Some of the victims
had broken their hands and feet. A woman was among the victims.
- NASA has announced that it will test a new type of flying saucer this year. It
may be to bring in aliens who have not yet landed on the earth. The cup-style
vehicle will be launched by what NASA calls a "low density supersonic decelerator"
rocket. The rocket is scheduled to be launched in June. NASA is interested in
launching a flying saucer into the atmosphere, but according to their own statements,
there's no connection between aliens and NASA's Flying Saucer. NASA wants to test
and demonstrate new technologies that can be used for launching objects into the
atmosphere. NASA said the mission will help to estimate how much payload is needed
for a manned Mars missions.
- Ar.... Arfin? Are you telling the truth? Is the child so good now? How many years
have passed since then... If you haven't even heard from the boy, you can hear
what Asiya is saying, I really want to see you, Asiya, please come here with Arfin,
if you have his number I want to call him now
- source_sentence: އޭނާ ރީތި.
sentences:
- She's pretty.
- Words of gratitude are being sent to the government and President Yameen for bringing
two new generators to the village within five days. The people of Thonadhoo have
shown the whole country that they have a people who love patience, unity and brotherhood.
It is a beautiful example of unity. The burden and pain of the power outages is
not easy for anyone to bear in such an era.
- 'Date of appointment: 22 June'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on sentence-transformers/LaBSE
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) <!-- at revision 836121a0533e5664b21c7aacc5d22951f2b8b25b -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False, 'architecture': 'BertModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("shiimi/labse-dhivehi-finetuned")
# Run inference
sentences = [
'އޭނާ ރީތި.',
"She's pretty.",
'Words of gratitude are being sent to the government and President Yameen for bringing two new generators to the village within five days. The people of Thonadhoo have shown the whole country that they have a people who love patience, unity and brotherhood. It is a beautiful example of unity. The burden and pain of the power outages is not easy for anyone to bear in such an era.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.9827, -0.0089],
# [ 0.9827, 1.0000, -0.0044],
# [-0.0089, -0.0044, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 968,266 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:------------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 3 tokens</li><li>mean: 121.67 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 3 tokens</li><li>mean: 64.68 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.51</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
| <code>އިންތިހާބު ލަސްކުރަން ބްލެޓާ ބޭނުމެއްނުވޭ: ފީފާ</code> | <code>The Ponoru River is a tributary of the Horezu in Romania .</code> | <code>0.0</code> |
| <code>ޖޯ އުފަންވީ 27 މާރޗް 1929 ގައި މެސެޗުސެޓްސްގެ ސޮމަރވިލް އަށް ކަމަށާއި ބޮޑުވީ މެސެޗުސެޓްސްގެ ކުއިންސީ ގައެވެ .</code> | <code>The National Inquiry Commission set up by the government of President Mohammed Vaheed Hassan Manik has said that the coup was not a coup and that the government was overthrown according to the rules of law.</code> | <code>0.0</code> |
| <code>ސާބިތު ދަރަނީގެ މައްސަލައެއްގައި ޑރ. މުހައްމަދު ޖަމީލްގެ އަނބިކަނބަލުން ހައުލަތު ފަހީމް އާއި އެ އާއިލާގެ އިތުރު ހަތް މީހެއްގެ ޕާސްޕޯޓް ހިފަހައްޓައި ބޭންކް އެކައުންޓްތައް ފްރީޒްކުރުމަށް ސިވިލް ކޯޓުން މިއަދު އަމުރު ނެރެފި އެވެ.ވީބީ އައްޑޫ އެފްސީގެ މުއައްސިސެއް ކަމަށްވާ މުހަންމަދު ޝަވީދުގެ ވެސް ބައްޕަ މަރުހޫމް މޫސާ ފަހީމްގެ އަށް ވާރިސުންގެ ޕާސްޕޯޓާއި، ރާއްޖޭގެ ބޭންކްތަކުގައި ހުރި ހުރިހާ އެކައުންޓެއް ހިފަހައްޓަން ސިވިލް ކޯޓުން މިއަދު ހެނދުނު ނިންމީ، ތްރީއޭ ކޮމްޕެނީ ނުބަހާއިރު އެ ކުންފުނީގެ ހަތަރު ރިސޯޓެއްގެ ޖެނެރަލް މެނޭޖަރެއްގެ ގޮތުގައި ވަޒީފާ އަދާކުރި މުހަންމަދު އަނީސް ކޮށްފައިވާ ދައުވާއަކާ ގުޅިގެން ބޭއްވި ޝަރީއަތުގެ އަޑުއެހުމުގަ އެވެ. އެ އަޑުއެހުމަށް ވާރިސުންގެ ފަރާތުން ހާޒިރެއް ނުވެ އެވެ. ވާރިސުންގެ ވަކީލް ވިދާޅުވީ ސިވިލް ކޯޓުގެ ހުކުމް ހައި ކޯޓަށް އިސްތިއުނާފަށް ހުށަހަޅާފައިވާ ކަމަށެވެ.ސާބިތު ދަރަނީގެ ކޮންމެ މައްސަލައެއްގައި ވެސް ދަރަނި އަދާނުކުރާ ހާލަތެއްގައި، ހުކުމް ތަންފީޒުކުރުމަށް އެދި ހުށަހަޅެމުން ޕާސްޕޯޓް ހިފަހައްޓައި އެކައުންޓުތައް ފްރީޒްކުރުމަކީ ކޯޓުން އަމަލުކުރާ އާންމު އުސޫލެވ...</code> | <code>The Civil Court of Maldives has ordered the seizure of passports and freezing bank accounts belonging to Haulath Faheem, wife of former President Dr. Mohamed Jamil, as well as seven other members of his family in connection with a case of proven debt. This was decided by the court today after an action filed by Mohammad Aniis who served as General Manager at four resorts owned by Three A Company when it was not being divided into shares. The heir was not present at the court. The lawyer for the heirs said that he has appealed to the High Court against this decision. In any case of proven debt, it is a common practice in courts to hold passports and freeze accounts as part of an application for enforcement of judgment when there are no payments made by debtors. The family appealed the Civil Court’s order to pay them back, which was then reviewed by the Supreme Court. In addition to the three charges, Anies also brought another two cases against Musa Fahim’s heirs. The other accused are ...</code> | <code>1.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
```json
{
"loss_fct": "torch.nn.modules.loss.MSELoss"
}
```
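As a rough illustration, fine-tuning with `CosineSimilarityLoss` follows the pattern sketched below; the example pairs, batch size, and epoch count are placeholders rather than the actual training setup.
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("sentence-transformers/LaBSE")

# Placeholder Dhivehi-English pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=["އޭނާ ރީތި.", "She's pretty."], label=1.0),
    InputExample(texts=["އޭނާ ރީތި.", "The Ponoru River is a tributary of the Horezu."], label=0.0),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# CosineSimilarityLoss drives cosine(u, v) toward the label via MSE
train_loss = losses.CosineSimilarityLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1)
```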
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 1
- `multi_dataset_batch_sampler`: round_robin
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.0661 | 500 | 0.0528 |
| 0.1322 | 1000 | 0.0298 |
| 0.1983 | 1500 | 0.0261 |
| 0.2644 | 2000 | 0.0242 |
| 0.3305 | 2500 | 0.0235 |
| 0.3966 | 3000 | 0.0223 |
| 0.4627 | 3500 | 0.0207 |
| 0.5288 | 4000 | 0.0208 |
| 0.5948 | 4500 | 0.0196 |
| 0.6609 | 5000 | 0.0192 |
| 0.7270 | 5500 | 0.0190 |
| 0.7931 | 6000 | 0.0181 |
| 0.8592 | 6500 | 0.0181 |
| 0.9253 | 7000 | 0.0175 |
| 0.9914 | 7500 | 0.0178 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.9.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
hwang2006/finetuned-korean-gpt-oss-20b
|
hwang2006
| 2025-08-19T06:27:35Z
| 0
| 0
|
peft
|
[
"peft",
"safetensors",
"unsloth",
"lora",
"korean",
"education",
"textbook",
"gpt-oss",
"한국어",
"교육",
"파인튜닝",
"text-generation",
"conversational",
"ko",
"dataset:maywell/korean_textbooks",
"base_model:unsloth/gpt-oss-20b",
"base_model:adapter:unsloth/gpt-oss-20b",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-19T06:27:31Z
|
---
license: apache-2.0
base_model: unsloth/gpt-oss-20b
tags:
- unsloth
- lora
- korean
- education
- textbook
- gpt-oss
- 한국어
- 교육
- 파인튜닝
language:
- ko
datasets:
- maywell/korean_textbooks
library_name: peft
pipeline_tag: text-generation
---
# Korean Textbook Fine-tuned Model (한국어 교육 자료 파인튜닝 모델)
## 📚 Model Overview
This model is a Korean-language education model fine-tuned from **unsloth/gpt-oss-20b** on the **maywell/korean_textbooks** dataset.
It was trained efficiently using LoRA (Low-Rank Adaptation) and is specialized for generating Korean educational content.
## 🎯 Key Features
- **Base model**: unsloth/gpt-oss-20b (20B parameters)
- **Training method**: LoRA (Low-Rank Adaptation)
- **Specialization**: Korean educational content generation
- **Dataset**: maywell/korean_textbooks
- **Language**: Korean (한국어)
## 🚀 Usage
### Loading the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch
# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
"unsloth/gpt-oss-20b",
torch_dtype=torch.float16,
device_map="auto",
trust_remote_code=True
)
# Load the LoRA adapter
model = PeftModel.from_pretrained(base_model, "hwang2006/finetuned-korean-gpt-oss-20b")
# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("hwang2006/finetuned-korean-gpt-oss-20b")
```
### Example Usage
```python
messages = [
    # System prompt (Korean): "You are a helpful assistant that explains educational content in Korean."
    {"role": "system", "content": "당신은 한국어로 교육 내용을 설명하는 도움이 되는 어시스턴트입니다."},
    # User prompt (Korean): "Please explain powers of 2."
    {"role": "user", "content": "2의 거듭제곱에 대해 설명해주세요."}
]
inputs = tokenizer.apply_chat_template(
messages,
add_generation_prompt=True,
return_tensors="pt",
return_dict=True
).to(model.device)
with torch.no_grad():
outputs = model.generate(
**inputs,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.9,
pad_token_id=tokenizer.eos_token_id
)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## 📊 Training Information
- **Base model**: unsloth/gpt-oss-20b-unsloth-bnb-4bit
- **Training steps**: 30
- **LoRA Rank**: 8
- **LoRA Alpha**: 16
- **Target modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Dataset**: maywell/korean_textbooks
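For reference, the settings above correspond roughly to the PEFT configuration sketched below; values not listed in this card (such as `lora_dropout` and the task type) are assumptions.
```python
from peft import LoraConfig

# LoRA configuration implied by the training info above;
# lora_dropout and task_type are assumptions, not stated in this card.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.0,
    task_type="CAUSAL_LM",
)
```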
## 🎓 Application Areas
This model performs well in the following areas:
### Mathematics (수학)
- Explaining basic math concepts
- Working through algebra, geometry, and calculus problems
- Building intuition for mathematical formulas
### Science (과학)
- Explaining principles of physics, chemistry, and biology
- Interpreting experimental procedures and results
- Understanding scientific phenomena
### Language (언어)
- Explaining Korean grammar and vocabulary
- Analyzing and interpreting literary works
- Guidance on writing techniques
### Social Studies (사회)
- Explaining historical events and figures
- Geographic concepts and phenomena
- Understanding social institutions and culture
## 💻 System Requirements
- **GPU memory**: at least 16GB (24GB+ recommended)
- **System RAM**: at least 16GB
- **Python**: 3.8+
- **Key libraries**: transformers, peft, torch
## ⚠️ Caveats
1. **Education-focused**: This model is optimized for generating educational content.
2. **Korean-centric**: Performance may be limited in languages other than Korean.
3. **Fact-checking required**: Generated content should always be reviewed and fact-checked.
4. **Ethical use**: Please use the model only for educational and constructive purposes.
## 🔗 Related Links
- **Base model**: [unsloth/gpt-oss-20b](https://huggingface.co/unsloth/gpt-oss-20b)
- **Dataset**: [maywell/korean_textbooks](https://huggingface.co/datasets/maywell/korean_textbooks)
## 📜 License
This model follows the license of its base model, unsloth/gpt-oss-20b.
|
ariankharazmi/Curiosity-14
|
ariankharazmi
| 2025-08-19T06:27:29Z
| 3
| 0
| null |
[
"safetensors",
"gpt2",
"research",
"text-generation",
"en",
"base_model:openai-community/gpt2",
"base_model:finetune:openai-community/gpt2",
"license:mit",
"region:us"
] |
text-generation
| 2025-04-25T03:43:28Z
|
---
license: mit
language:
- en
base_model:
- openai-community/gpt2
pipeline_tag: text-generation
tags:
- research
---
Curiosity-14 is a low-level LLM, built over the seven weeks of the Summer 2024 UCinci EEP.
It is the culmination of the program's research, coded deliverables, and painstaking patience, delivered as one final advanced deliverable.
|
BootesVoid/cmei466j90qj6rts8qru8anlz_cmei4jw3t0qjyrts8691kch1z
|
BootesVoid
| 2025-08-19T06:26:24Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-19T06:26:23Z
|
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: ARIXITA
---
# Cmei466J90Qj6Rts8Qru8Anlz_Cmei4Jw3T0Qjyrts8691Kch1Z
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ARIXITA` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "ARIXITA",
"lora_weights": "https://huggingface.co/BootesVoid/cmei466j90qj6rts8qru8anlz_cmei4jw3t0qjyrts8691kch1z/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmei466j90qj6rts8qru8anlz_cmei4jw3t0qjyrts8691kch1z', weight_name='lora.safetensors')
image = pipeline('ARIXITA').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmei466j90qj6rts8qru8anlz_cmei4jw3t0qjyrts8691kch1z/discussions) to add images that show off what you’ve made with this LoRA.
|
thailevann/track8_subtask1_v3
|
thailevann
| 2025-08-19T06:26:07Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T02:46:57Z
|
---
base_model: unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thailevann
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B-Thinking-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ShuklaShreyansh/LoraModel
|
ShuklaShreyansh
| 2025-08-19T06:24:50Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:24:42Z
|
---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** ShuklaShreyansh
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
donoway/ARC-Challenge_Llama-3.2-1B-h7hk8kox
|
donoway
| 2025-08-19T06:22:39Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:12:00Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-h7hk8kox
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-h7hk8kox
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5495
- Model Preparation Time: 0.0059
- Mdl: 668.3965
- Accumulated Loss: 463.2971
- Correct Preds: 82.0
- Total Preds: 299.0
- Accuracy: 0.2742
- Correct Gen Preds: 82.0
- Gen Accuracy: 0.2742
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 0.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 38.0
- Correct Preds 33: 38.0
- Total Labels 33: 73.0
- Accuracy 33: 0.5205
- Gen Accuracy 33: 0.5205
- Correct Gen Preds 34: 0.0
- Correct Preds 34: 0.0
- Total Labels 34: 78.0
- Accuracy 34: 0.0
- Gen Accuracy 34: 0.0
- Correct Gen Preds 35: 44.0
- Correct Preds 35: 44.0
- Total Labels 35: 83.0
- Accuracy 35: 0.5301
- Gen Accuracy 35: 0.5301
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
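For reference, the headline metrics above appear to be internally consistent: the accumulated loss is the mean eval loss summed over all predictions, and the Mdl value is that total converted from nats to bits. A sketch of the presumed relationships:
```python
import math

loss_nats = 1.5495                            # mean eval cross-entropy, in nats
total_preds = 299
accumulated_loss = loss_nats * total_preds    # ≈ 463.30, the reported Accumulated Loss
mdl_bits = accumulated_loss / math.log(2)     # ≈ 668.40, the reported Mdl
```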
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0059 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8312 | 1.0 | 1 | 1.6389 | 0.0059 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8312 | 2.0 | 2 | 2.4321 | 0.0059 | 1049.1447 | 727.2117 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.5198 | 3.0 | 3 | 1.5495 | 0.0059 | 668.3965 | 463.2971 | 82.0 | 299.0 | 0.2742 | 82.0 | 0.2742 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 38.0 | 38.0 | 73.0 | 0.5205 | 0.5205 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 44.0 | 44.0 | 83.0 | 0.5301 | 0.5301 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.8959 | 4.0 | 4 | 1.8798 | 0.0059 | 810.8944 | 562.0692 | 72.0 | 299.0 | 0.2408 | 62.0 | 0.2074 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 62.0 | 71.0 | 73.0 | 0.9726 | 0.8493 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.3015 | 5.0 | 5 | 2.5108 | 0.0059 | 1083.0743 | 750.7299 | 74.0 | 299.0 | 0.2475 | 69.0 | 0.2308 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 69.0 | 72.0 | 73.0 | 0.9863 | 0.9452 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.028 | 6.0 | 6 | 3.4943 | 0.0059 | 1507.3419 | 1044.8098 | 77.0 | 299.0 | 0.2575 | 55.0 | 0.1839 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 45.0 | 64.0 | 73.0 | 0.8767 | 0.6164 | 9.0 | 10.0 | 78.0 | 0.1282 | 0.1154 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0006 | 7.0 | 7 | 4.9362 | 0.0059 | 2129.3200 | 1475.9322 | 78.0 | 299.0 | 0.2609 | 59.0 | 0.1973 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 42.0 | 59.0 | 73.0 | 0.8082 | 0.5753 | 15.0 | 16.0 | 78.0 | 0.2051 | 0.1923 | 2.0 | 3.0 | 83.0 | 0.0361 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 6.1133 | 0.0059 | 2637.0566 | 1827.8683 | 78.0 | 299.0 | 0.2609 | 58.0 | 0.1940 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 38.0 | 55.0 | 73.0 | 0.7534 | 0.5205 | 18.0 | 20.0 | 78.0 | 0.2564 | 0.2308 | 2.0 | 3.0 | 83.0 | 0.0361 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 7.0013 | 0.0059 | 3020.1169 | 2093.3855 | 77.0 | 299.0 | 0.2575 | 57.0 | 0.1906 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 35.0 | 51.0 | 73.0 | 0.6986 | 0.4795 | 20.0 | 24.0 | 78.0 | 0.3077 | 0.2564 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 7.6698 | 0.0059 | 3308.5009 | 2293.2781 | 74.0 | 299.0 | 0.2475 | 57.0 | 0.1906 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 32.0 | 45.0 | 73.0 | 0.6164 | 0.4384 | 23.0 | 27.0 | 78.0 | 0.3462 | 0.2949 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 8.1617 | 0.0059 | 3520.6606 | 2440.3360 | 76.0 | 299.0 | 0.2542 | 56.0 | 0.1873 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 31.0 | 44.0 | 73.0 | 0.6027 | 0.4247 | 23.0 | 30.0 | 78.0 | 0.3846 | 0.2949 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 8.4710 | 0.0059 | 3654.0852 | 2532.8188 | 76.0 | 299.0 | 0.2542 | 55.0 | 0.1839 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 29.0 | 44.0 | 73.0 | 0.6027 | 0.3973 | 24.0 | 30.0 | 78.0 | 0.3846 | 0.3077 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 8.6916 | 0.0059 | 3749.2704 | 2598.7962 | 76.0 | 299.0 | 0.2542 | 54.0 | 0.1806 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 27.0 | 44.0 | 73.0 | 0.6027 | 0.3699 | 25.0 | 30.0 | 78.0 | 0.3846 | 0.3205 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 8.8348 | 0.0059 | 3811.0335 | 2641.6071 | 73.0 | 299.0 | 0.2441 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 24.0 | 42.0 | 73.0 | 0.5753 | 0.3288 | 25.0 | 29.0 | 78.0 | 0.3718 | 0.3205 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 8.9019 | 0.0059 | 3839.9915 | 2661.6793 | 73.0 | 299.0 | 0.2441 | 52.0 | 0.1739 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 24.0 | 41.0 | 73.0 | 0.5616 | 0.3288 | 26.0 | 30.0 | 78.0 | 0.3846 | 0.3333 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 8.9631 | 0.0059 | 3866.3889 | 2679.9766 | 74.0 | 299.0 | 0.2475 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 23.0 | 41.0 | 73.0 | 0.5616 | 0.3151 | 26.0 | 31.0 | 78.0 | 0.3974 | 0.3333 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 8.9824 | 0.0059 | 3874.6860 | 2685.7277 | 72.0 | 299.0 | 0.2408 | 48.0 | 0.1605 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 40.0 | 73.0 | 0.5479 | 0.2740 | 26.0 | 30.0 | 78.0 | 0.3846 | 0.3333 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 9.0320 | 0.0059 | 3896.1027 | 2700.5726 | 72.0 | 299.0 | 0.2408 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 21.0 | 40.0 | 73.0 | 0.5479 | 0.2877 | 26.0 | 30.0 | 78.0 | 0.3846 | 0.3333 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 9.0877 | 0.0059 | 3920.1086 | 2717.2122 | 73.0 | 299.0 | 0.2441 | 50.0 | 0.1672 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 21.0 | 40.0 | 73.0 | 0.5479 | 0.2877 | 27.0 | 31.0 | 78.0 | 0.3974 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 9.1178 | 0.0059 | 3933.0995 | 2726.2168 | 72.0 | 299.0 | 0.2408 | 48.0 | 0.1605 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 38.0 | 73.0 | 0.5205 | 0.2603 | 27.0 | 32.0 | 78.0 | 0.4103 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 9.1254 | 0.0059 | 3936.3758 | 2728.4878 | 73.0 | 299.0 | 0.2441 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 39.0 | 73.0 | 0.5342 | 0.2740 | 27.0 | 32.0 | 78.0 | 0.4103 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 9.1317 | 0.0059 | 3939.0954 | 2730.3729 | 73.0 | 299.0 | 0.2441 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 22.0 | 40.0 | 73.0 | 0.5479 | 0.3014 | 27.0 | 31.0 | 78.0 | 0.3974 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 9.1612 | 0.0059 | 3951.8407 | 2739.2072 | 73.0 | 299.0 | 0.2441 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 38.0 | 73.0 | 0.5205 | 0.2603 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 9.1903 | 0.0059 | 3964.3817 | 2747.9000 | 72.0 | 299.0 | 0.2408 | 50.0 | 0.1672 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 21.0 | 39.0 | 73.0 | 0.5342 | 0.2877 | 27.0 | 31.0 | 78.0 | 0.3974 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 9.1768 | 0.0059 | 3958.5709 | 2743.8723 | 73.0 | 299.0 | 0.2441 | 50.0 | 0.1672 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 39.0 | 73.0 | 0.5342 | 0.2740 | 28.0 | 32.0 | 78.0 | 0.4103 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 9.1828 | 0.0059 | 3961.1261 | 2745.6434 | 74.0 | 299.0 | 0.2475 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 21.0 | 39.0 | 73.0 | 0.5342 | 0.2877 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 9.1905 | 0.0059 | 3964.4725 | 2747.9630 | 73.0 | 299.0 | 0.2441 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 39.0 | 73.0 | 0.5342 | 0.2603 | 28.0 | 32.0 | 78.0 | 0.4103 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 9.2214 | 0.0059 | 3977.7838 | 2757.1896 | 74.0 | 299.0 | 0.2475 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 21.0 | 39.0 | 73.0 | 0.5342 | 0.2877 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 9.2087 | 0.0059 | 3972.3245 | 2753.4055 | 73.0 | 299.0 | 0.2441 | 51.0 | 0.1706 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 22.0 | 38.0 | 73.0 | 0.5205 | 0.3014 | 27.0 | 33.0 | 78.0 | 0.4231 | 0.3462 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 9.1917 | 0.0059 | 3964.9699 | 2748.3077 | 73.0 | 299.0 | 0.2441 | 50.0 | 0.1672 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 38.0 | 73.0 | 0.5205 | 0.2740 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 9.2166 | 0.0059 | 3975.7263 | 2755.7635 | 73.0 | 299.0 | 0.2441 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 39.0 | 73.0 | 0.5342 | 0.2603 | 28.0 | 32.0 | 78.0 | 0.4103 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 9.2286 | 0.0059 | 3980.9125 | 2759.3583 | 74.0 | 299.0 | 0.2475 | 49.0 | 0.1639 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 39.0 | 73.0 | 0.5342 | 0.2603 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 9.1959 | 0.0059 | 3966.7926 | 2749.5711 | 73.0 | 299.0 | 0.2441 | 50.0 | 0.1672 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 38.0 | 73.0 | 0.5205 | 0.2740 | 28.0 | 33.0 | 78.0 | 0.4231 | 0.3590 | 2.0 | 2.0 | 83.0 | 0.0241 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
quantumxnode/blockassist-bc-dormant_peckish_seahorse_1755582934
|
quantumxnode
| 2025-08-19T06:21:45Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"dormant peckish seahorse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:21:41Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- dormant peckish seahorse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755584377
|
IvanJAjebu
| 2025-08-19T06:21:20Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:20:56Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ChenWu98/numina_qwen_2.5_sft_cluster_split_0
|
ChenWu98
| 2025-08-19T06:21:10Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-1.5B",
"base_model:finetune:Qwen/Qwen2.5-1.5B",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:19:18Z
|
---
base_model: Qwen/Qwen2.5-1.5B
library_name: transformers
model_name: numina_qwen_2.5_sft_cluster_split_0
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_sft_cluster_split_0
This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_sft_cluster_split_0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/0ktvdv3m)
This model was trained with SFT.
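A minimal sketch of what an SFT run with TRL looks like; the dataset contents and training arguments below are placeholders, since the actual configuration is not documented here.
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset; the real training data is not documented in this card
train_dataset = Dataset.from_dict(
    {"text": ["Problem: 1 + 1 = ?\nSolution: 2"]}
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-1.5B",
    train_dataset=train_dataset,
    args=SFTConfig(output_dir="numina_qwen_2.5_sft_cluster_split_0"),
)
trainer.train()
```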
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
VoilaRaj/78_bZOeug
|
VoilaRaj
| 2025-08-19T06:19:34Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:15:41Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
tencent/Hunyuan-GameCraft-1.0
|
tencent
| 2025-08-19T06:19:16Z
| 0
| 384
| null |
[
"safetensors",
"image-to-video",
"en",
"arxiv:2506.17201",
"region:us"
] |
image-to-video
| 2025-08-13T07:10:08Z
|
---
pipeline_tag: image-to-video
language:
- en
extra_gated_eu_disallowed: true
---
<!-- ## **Hunyuan-GameCraft** -->
<!-- <p align="center">
<img src="assets/material/logo.png" height=100>
</p> -->
# **Hunyuan-GameCraft** 🎮
<div align="center">
<a href="https://github.com/Tencent-Hunyuan/Hunyuan-GameCraft-1.0"><img src="https://img.shields.io/static/v1?label=Hunyuan-GameCraft-1.0%20Code&message=Github&color=blue"></a>  
<a href="https://hunyuan-gamecraft.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Web&color=green"></a>  
<a href="https://arxiv.org/abs/2506.17201"><img src="https://img.shields.io/badge/ArXiv-2506.17201-red"></a>  
</div>
<div align="center">
<a href="https://huggingface.co/tencent/Hunyuan-GameCraft-1.0"><img src="https://img.shields.io/static/v1?label=Huggingface&message=Hunyuan-GameCraft-1.0&color=yellow"></a>  
</div>

> [**Hunyuan-GameCraft: High-dynamic Interactive Game Video Generation with Hybrid History Condition**](https://arxiv.org/abs/2506.17201) <br>
## 🔥🔥🔥 News!!
* Aug 14, 2025: 👋 We release the inference code and model weights of Hunyuan-GameCraft. [Download](weights/README.md).
## 📑 Open-source Plan
- Hunyuan-GameCraft
- [x] Inference
- [x] Checkpoints
- [ ] Gradio & Huggingface Demo
## Contents
- [**Hunyuan-GameCraft** 🎮](#Hunyuan-GameCraft-)
- [🔥🔥🔥 News!!](#-news)
- [📑 Open-source Plan](#-open-source-plan)
- [Contents](#contents)
- [**Abstract**](#abstract)
- [**Overall Architecture**](#Hunyuan-GameCraft-overall-architecture)
- [📜 Requirements](#-requirements)
- [🛠️ Dependencies and Installation](#️-dependencies-and-installation)
- [Installation Guide for Linux](#installation-guide-for-linux)
- [🧱 Download Pretrained Models](#-download-pretrained-models)
- [🚀 Parallel Inference on Multiple GPUs](#-parallel-inference-on-multiple-gpus)
- [🔑 Single-gpu Inference](#-single-gpu-inference)
- [Run with very low VRAM](#run-with-very-low-vram)
- [Run a Gradio Server](#run-a-gradio-server)
- [🔗 BibTeX](#-bibtex)
- [Acknowledgements](#acknowledgements)
---
## **Abstract**
Recent advances in diffusion-based and controllable video generation have enabled high-quality and temporally coherent video synthesis, laying the groundwork for immersive interactive gaming experiences. However, current methods face limitations in **dynamics**, **physical realism**, **long-term consistency**, and **efficiency**, which restrict the ability to create varied gameplay videos. To address these gaps, we introduce Hunyuan-GameCraft, a novel framework for high-dynamic interactive video generation in game environments. To achieve fine-grained action control, we unify standard keyboard and mouse inputs into a **shared camera representation space**, facilitating smooth interpolation between various camera and movement operations. Then we propose a **hybrid history-conditioned training strategy** that extends video sequences autoregressively while preserving game scene information. Additionally, to enhance inference efficiency and playability, we apply **model distillation** to reduce computational overhead while maintaining consistency across long temporal sequences, making the model suitable for real-time deployment in complex interactive environments. The model is trained on a large-scale dataset comprising over one million gameplay recordings from more than 100 AAA games, ensuring broad coverage and diversity, and then fine-tuned on a carefully annotated synthetic dataset to enhance precision and control. The curated game scene data significantly improves visual fidelity, realism, and action controllability. Extensive experiments demonstrate that Hunyuan-GameCraft significantly outperforms existing models, advancing the realism and playability of interactive game video generation.
## **Overall Architecture**

Given a reference image, the corresponding prompt, and keyboard or mouse signals, we transform these inputs into a continuous camera space. We then design a lightweight action encoder to encode the input camera trajectory. The action and image features are added after patchify. For long video extension, we design a variable mask indicator, where 1 and 0 indicate history frames and predicted frames, respectively.
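A minimal sketch of the mask indicator described above, with illustrative frame counts (the model's actual chunk sizes are an assumption here):
```python
import torch

# Hybrid history-condition mask: 1 = history (conditioning) frame,
# 0 = frame to be predicted. Frame counts are illustrative only.
num_history_frames, num_predicted_frames = 8, 25
mask = torch.cat([
    torch.ones(num_history_frames),
    torch.zeros(num_predicted_frames),
])  # shape: [33]
```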
## 📜 Requirements
* An NVIDIA GPU with CUDA support is required.
* The model has been tested on a machine with 8 GPUs.
* **Minimum**: At least 24GB of GPU memory is required, though inference will be very slow.
* **Recommended**: We recommend using a GPU with 80GB of memory for better generation quality.
* Tested operating system: Linux
## 🛠️ Dependencies and Installation
Begin by cloning the repository:
```shell
git clone https://github.com/Tencent-Hunyuan/Hunyuan-GameCraft-1.0.git
cd Hunyuan-GameCraft-1.0
```
### Installation Guide for Linux
We recommend CUDA version 12.4 for the manual installation.
Conda's installation instructions are available [here](https://docs.anaconda.com/free/miniconda/index.html).
```shell
# 1. Create conda environment
conda create -n HYGameCraft python==3.10
# 2. Activate the environment
conda activate HYGameCraft
# 3. Install PyTorch and other dependencies using conda
conda install pytorch==2.5.1 torchvision==0.20.0 torchaudio==2.5.1 pytorch-cuda=12.4 -c pytorch -c nvidia
# 4. Install pip dependencies
python -m pip install -r requirements.txt
# 5. Install flash attention v2 for acceleration (requires CUDA 11.8 or above)
python -m pip install ninja
python -m pip install git+https://github.com/Dao-AILab/[email protected]
```
Alternatively, you can use the HunyuanVideo Docker image. Use the following commands to pull and run it.
```shell
# For CUDA 12.4 (updated to avoid float point exception)
docker pull hunyuanvideo/hunyuanvideo:cuda_12
docker run -itd --gpus all --init --net=host --uts=host --ipc=host --name hunyuanvideo --security-opt=seccomp=unconfined --ulimit=stack=67108864 --ulimit=memlock=-1 --privileged hunyuanvideo/hunyuanvideo:cuda_12
pip install diffusers==0.34.0 transformers==4.54.1
```
## 🚀 Parallel Inference on Multiple GPUs
For example, to generate a video using 8 GPUs, you can use the following command, where `--action-list w s d a` simulates keyboard manipulation signals so you can generate a video with the corresponding content. `--action-speed-list 0.2 0.2 0.2 0.2` sets the displacement distance per action; each value can be replaced with any number between 0 and 3, and the length of `action-speed-list` must match the length of `action-list`:
```bash
#!/bin/bash
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=${JOBS_DIR}:$PYTHONPATH
export MODEL_BASE="weights/stdmodels"
checkpoint_path="weights/gamecraft_models/mp_rank_00_model_states.pt"
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
modelname='Tencent_hunyuanGameCraft_720P'
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--image-path "asset/village.png" \
--prompt "A charming medieval village with cobblestone streets, thatched-roof houses, and vibrant flower gardens under a bright blue sky." \
--add-pos-prompt "Realistic, High-quality." \
--add-neg-prompt "overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${checkpoint_path} \
--video-size 704 1216 \
--cfg-scale 2.0 \
--image-start \
--action-list w s d a \
--action-speed-list 0.2 0.2 0.2 0.2 \
--seed 250160 \
--infer-steps 50 \
--flow-shift-eval-video 5.0 \
--save-path './results/'
```
Additionally, we support FP8 optimization and [SageAttn](https://github.com/thu-ml/SageAttention). To enable FP8, simply add the `--use-fp8` flag to your command.
To use SageAttention, install it with:
```bash
git clone https://github.com/thu-ml/SageAttention.git
cd SageAttention
python setup.py install # or pip install -e .
```
We also provide an accelerated (distilled) model, which you can run with the following command:
```bash
#!/bin/bash
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=${JOBS_DIR}:$PYTHONPATH
export MODEL_BASE="weights/stdmodels"
checkpoint_path="weights/gamecraft_models/mp_rank_00_model_states_distill.pt"
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
modelname='Tencent_hunyuanGameCraft_720P'
torchrun --nnodes=1 --nproc_per_node=8 --master_port 29605 hymm_sp/sample_batch.py \
--image-path "asset/village.png" \
--prompt "A charming medieval village with cobblestone streets, thatched-roof houses, and vibrant flower gardens under a bright blue sky." \
--add-neg-prompt "overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${checkpoint_path} \
--video-size 704 1216 \
--cfg-scale 1.0 \
--image-start \
--action-list w s d a \
--action-speed-list 0.2 0.2 0.2 0.2 \
--seed 250160 \
--infer-steps 8 \
--use-fp8 \
--flow-shift-eval-video 5.0 \
--save-path './results_distill/'
```
## 🔑 Single-gpu with Low-VRAM Inference
For example, to generate a video on a single GPU with low VRAM (at least 24GB), you can use the following command:
```bash
#!/bin/bash
JOBS_DIR=$(dirname $(dirname "$0"))
export PYTHONPATH=${JOBS_DIR}:$PYTHONPATH
export MODEL_BASE="weights/stdmodels"
checkpoint_path="weights/gamecraft_models/mp_rank_00_model_states.pt"
current_time=$(date "+%Y.%m.%d-%H.%M.%S")
modelname='Tencent_hunyuanGameCraft_720P'
# Disable sequence parallelism and enable CPU offload
export DISABLE_SP=1
export CPU_OFFLOAD=1
torchrun --nnodes=1 --nproc_per_node=1 --master_port 29605 hymm_sp/sample_batch.py \
--image-path "asset/village.png" \
--prompt "A charming medieval village with cobblestone streets, thatched-roof houses, and vibrant flower gardens under a bright blue sky." \
--add-neg-prompt "overexposed, low quality, deformation, a poor composition, bad hands, bad teeth, bad eyes, bad limbs, distortion, blurring, text, subtitles, static, picture, black border." \
--ckpt ${checkpoint_path} \
--video-size 704 1216 \
--cfg-scale 2.0 \
--image-start \
--action-list w a d s \
--action-speed-list 0.2 0.2 0.2 0.2 \
--seed 250160 \
--sample-n-frames 33 \
--infer-steps 50 \
--flow-shift-eval-video 5.0 \
--cpu-offload \
--use-fp8 \
--save-path './results/'
```
## 🔗 BibTeX
If you find [Hunyuan-GameCraft](https://arxiv.org/abs/2506.17201) useful for your research and applications, please cite using this BibTeX:
```BibTeX
@misc{li2025hunyuangamecrafthighdynamicinteractivegame,
title={Hunyuan-GameCraft: High-dynamic Interactive Game Video Generation with Hybrid History Condition},
author={Jiaqi Li and Junshu Tang and Zhiyong Xu and Longhuang Wu and Yuan Zhou and Shuai Shao and Tianbao Yu and Zhiguo Cao and Qinglin Lu},
year={2025},
eprint={2506.17201},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.17201},
}
```
## Acknowledgements
We would like to thank the contributors to the [HunyuanVideo](https://github.com/Tencent/HunyuanVideo), [HunyuanVideo-Avatar](https://github.com/Tencent-Hunyuan/HunyuanVideo-Avatar), [SD3](https://huggingface.co/stabilityai/stable-diffusion-3-medium), [FLUX](https://github.com/black-forest-labs/flux), [Llama](https://github.com/meta-llama/llama), [LLaVA](https://github.com/haotian-liu/LLaVA), [Xtuner](https://github.com/InternLM/xtuner), [diffusers](https://github.com/huggingface/diffusers) and [HuggingFace](https://huggingface.co) repositories, for their open research and exploration.
|
ryanyang2025/butterfly-v1-7B
|
ryanyang2025
| 2025-08-19T06:16:45Z
| 0
| 1
|
transformers
|
[
"transformers",
"code",
"text-classification",
"en",
"dataset:EleutherAI/pile",
"arxiv:1910.09700",
"base_model:deepseek-ai/DeepSeek-R1",
"base_model:finetune:deepseek-ai/DeepSeek-R1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-19T06:08:57Z
|
---
license: mit
datasets:
- EleutherAI/pile
language:
- en
base_model:
- deepseek-ai/DeepSeek-R1
pipeline_tag: text-classification
library_name: transformers
tags:
- code
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
subsectmusic/qwriko4b-64k-2507-instruct
|
subsectmusic
| 2025-08-19T06:15:30Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"qwen3",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-08-19T05:07:14Z
|
---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** subsectmusic
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
valuesimplex-ai-lab/FinBERT1-base
|
valuesimplex-ai-lab
| 2025-08-19T06:14:50Z
| 0
| 1
| null |
[
"pytorch",
"safetensors",
"bert",
"finance",
"zh",
"base_model:google-bert/bert-base-chinese",
"base_model:finetune:google-bert/bert-base-chinese",
"license:apache-2.0",
"region:us"
] | null | 2025-08-17T10:03:57Z
|
---
license: apache-2.0
language:
- zh
base_model: google-bert/bert-base-chinese
tags:
- finance
---
## Model Details
**FinBERT1-Base** is a financial domain-adapted Chinese language model. Built on Google's BERT-Base architecture, it was continually pretrained on large-scale Chinese financial corpora to enhance financial text understanding.
- **Developed by:** See [valuesimplex](https://github.com/valuesimplex) for model developers
- **Model Type:** Transformer-based language model
- **Language(s):** Chinese
- **Parent Model:** See the [bert-base-chinese](https://huggingface.co/google-bert/bert-base-chinese) for more information about the BERT base model.
- **Resources:** [https://github.com/valuesimplex/FinBERT](https://github.com/valuesimplex/FinBERT)
## Direct Use
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("valuesimplex-ai-lab/FinBERT1-base")
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT1-base")
```
### Further Usage
For continual pre-training or fine-tuning, see https://github.com/valuesimplex/FinBERT.
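As a quick illustration of downstream use, here is a minimal sequence-classification sketch on financial text. The task setup, example sentences, and `num_labels` below are illustrative assumptions, not part of the official recipe; the classification head is randomly initialized until fine-tuned.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical setup: binary sentiment on Chinese financial text.
model = AutoModelForSequenceClassification.from_pretrained(
    "valuesimplex-ai-lab/FinBERT1-base", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("valuesimplex-ai-lab/FinBERT1-base")

texts = ["公司季度净利润同比增长20%", "公司发布盈利预警"]  # made-up examples
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # scores are meaningful only after the head is trained
```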
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755584027
|
0xaoyama
| 2025-08-19T06:14:30Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:14:15Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755583947
|
IvanJAjebu
| 2025-08-19T06:14:14Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:13:48Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nightmedia/Cydonia-24B-v4.1-q5-mlx
|
nightmedia
| 2025-08-19T06:12:41Z
| 0
| 0
|
mlx
|
[
"mlx",
"safetensors",
"mistral",
"text-generation",
"conversational",
"base_model:TheDrummer/Cydonia-24B-v4.1",
"base_model:quantized:TheDrummer/Cydonia-24B-v4.1",
"5-bit",
"region:us"
] |
text-generation
| 2025-08-19T05:09:58Z
|
---
base_model: TheDrummer/Cydonia-24B-v4.1
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# Cydonia-24B-v4.1-q5-mlx
This model [Cydonia-24B-v4.1-q5-mlx](https://huggingface.co/nightmedia/Cydonia-24B-v4.1-q5-mlx) was
converted to MLX format from [TheDrummer/Cydonia-24B-v4.1](https://huggingface.co/TheDrummer/Cydonia-24B-v4.1)
using mlx-lm version **0.26.3**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("nightmedia/Cydonia-24B-v4.1-q5-mlx")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
WenFengg/21_14l2_19_8_
|
WenFengg
| 2025-08-19T06:12:11Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T06:03:27Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
donoway/ARC-Challenge_Llama-3.2-1B-5v3zw441
|
donoway
| 2025-08-19T06:11:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:54:42Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Challenge_Llama-3.2-1B-5v3zw441
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Challenge_Llama-3.2-1B-5v3zw441
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 9.1702
- Model Preparation Time: 0.0072
- Mdl: 3955.7186
- Accumulated Loss: 2741.8952
- Correct Preds: 92.0
- Total Preds: 299.0
- Accuracy: 0.3077
- Correct Gen Preds: 65.0
- Gen Accuracy: 0.2174
- Correct Gen Preds 32: 0.0
- Correct Preds 32: 0.0
- Total Labels 32: 64.0
- Accuracy 32: 0.0
- Gen Accuracy 32: 0.0
- Correct Gen Preds 33: 16.0
- Correct Preds 33: 35.0
- Total Labels 33: 73.0
- Accuracy 33: 0.4795
- Gen Accuracy 33: 0.2192
- Correct Gen Preds 34: 45.0
- Correct Preds 34: 52.0
- Total Labels 34: 78.0
- Accuracy 34: 0.6667
- Gen Accuracy 34: 0.5769
- Correct Gen Preds 35: 4.0
- Correct Preds 35: 5.0
- Total Labels 35: 83.0
- Accuracy 35: 0.0602
- Gen Accuracy 35: 0.0482
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 1.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
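The `Mdl` and `Accumulated Loss` figures above appear to be the same quantity in different units: the accumulated loss is in nats, and dividing it by ln 2 recovers the MDL value in bits. A quick sanity check under that assumption:
```python
import math

accumulated_loss_nats = 2741.8952  # final eval value from the metrics above
mdl_bits = accumulated_loss_nats / math.log(2)
print(round(mdl_bits, 4))  # ≈ 3955.7186, matching the reported Mdl
```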
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.6389 | 0.0072 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8059 | 1.0 | 1 | 1.6389 | 0.0072 | 706.9523 | 490.0220 | 66.0 | 299.0 | 0.2207 | 66.0 | 0.2207 | 62.0 | 62.0 | 64.0 | 0.9688 | 0.9688 | 0.0 | 0.0 | 73.0 | 0.0 | 0.0 | 4.0 | 4.0 | 78.0 | 0.0513 | 0.0513 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.8059 | 2.0 | 2 | 2.3768 | 0.0072 | 1025.2880 | 710.6755 | 73.0 | 299.0 | 0.2441 | 73.0 | 0.2441 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 73.0 | 73.0 | 73.0 | 1.0 | 1.0 | 0.0 | 0.0 | 78.0 | 0.0 | 0.0 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 1.4788 | 3.0 | 3 | 1.5504 | 0.0072 | 668.7983 | 463.5757 | 88.0 | 299.0 | 0.2943 | 87.0 | 0.2910 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 21.0 | 73.0 | 0.2877 | 0.2740 | 52.0 | 52.0 | 78.0 | 0.6667 | 0.6667 | 15.0 | 15.0 | 83.0 | 0.1807 | 0.1807 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.8828 | 4.0 | 4 | 1.8422 | 0.0072 | 794.6544 | 550.8124 | 71.0 | 299.0 | 0.2375 | 51.0 | 0.1706 | 1.0 | 4.0 | 64.0 | 0.0625 | 0.0156 | 49.0 | 65.0 | 73.0 | 0.8904 | 0.6712 | 1.0 | 2.0 | 78.0 | 0.0256 | 0.0128 | 0.0 | 0.0 | 83.0 | 0.0 | 0.0 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.2786 | 5.0 | 5 | 2.3001 | 0.0072 | 992.1852 | 687.7304 | 82.0 | 299.0 | 0.2742 | 77.0 | 0.2575 | 0.0 | 2.0 | 64.0 | 0.0312 | 0.0 | 66.0 | 69.0 | 73.0 | 0.9452 | 0.9041 | 10.0 | 10.0 | 78.0 | 0.1282 | 0.1282 | 1.0 | 1.0 | 83.0 | 0.0120 | 0.0120 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0357 | 6.0 | 6 | 3.5008 | 0.0072 | 1510.1111 | 1046.7292 | 76.0 | 299.0 | 0.2542 | 60.0 | 0.2007 | 0.0 | 1.0 | 64.0 | 0.0156 | 0.0 | 37.0 | 51.0 | 73.0 | 0.6986 | 0.5068 | 21.0 | 21.0 | 78.0 | 0.2692 | 0.2692 | 2.0 | 3.0 | 83.0 | 0.0361 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0004 | 7.0 | 7 | 4.9923 | 0.0072 | 2153.5282 | 1492.7120 | 83.0 | 299.0 | 0.2776 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 35.0 | 48.0 | 73.0 | 0.6575 | 0.4795 | 29.0 | 32.0 | 78.0 | 0.4103 | 0.3718 | 2.0 | 3.0 | 83.0 | 0.0361 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 8.0 | 8 | 6.1504 | 0.0072 | 2653.0578 | 1838.9595 | 91.0 | 299.0 | 0.3043 | 72.0 | 0.2408 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 33.0 | 46.0 | 73.0 | 0.6301 | 0.4521 | 37.0 | 40.0 | 78.0 | 0.5128 | 0.4744 | 2.0 | 5.0 | 83.0 | 0.0602 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 6.9987 | 0.0072 | 3018.9984 | 2092.6102 | 90.0 | 299.0 | 0.3010 | 70.0 | 0.2341 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 29.0 | 42.0 | 73.0 | 0.5753 | 0.3973 | 39.0 | 44.0 | 78.0 | 0.5641 | 0.5 | 2.0 | 4.0 | 83.0 | 0.0482 | 0.0241 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 7.6103 | 0.0072 | 3282.8440 | 2275.4941 | 90.0 | 299.0 | 0.3010 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 24.0 | 39.0 | 73.0 | 0.5342 | 0.3288 | 42.0 | 45.0 | 78.0 | 0.5769 | 0.5385 | 3.0 | 6.0 | 83.0 | 0.0723 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 8.0287 | 0.0072 | 3463.2859 | 2400.5668 | 88.0 | 299.0 | 0.2943 | 70.0 | 0.2341 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 23.0 | 35.0 | 73.0 | 0.4795 | 0.3151 | 44.0 | 48.0 | 78.0 | 0.6154 | 0.5641 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 8.4104 | 0.0072 | 3627.9392 | 2514.6958 | 88.0 | 299.0 | 0.2943 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 22.0 | 34.0 | 73.0 | 0.4658 | 0.3014 | 42.0 | 49.0 | 78.0 | 0.6282 | 0.5385 | 3.0 | 5.0 | 83.0 | 0.0602 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 8.6021 | 0.0072 | 3710.6622 | 2572.0350 | 91.0 | 299.0 | 0.3043 | 70.0 | 0.2341 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 23.0 | 35.0 | 73.0 | 0.4795 | 0.3151 | 44.0 | 50.0 | 78.0 | 0.6410 | 0.5641 | 3.0 | 6.0 | 83.0 | 0.0723 | 0.0361 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 8.7289 | 0.0072 | 3765.3495 | 2609.9414 | 91.0 | 299.0 | 0.3043 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 22.0 | 36.0 | 73.0 | 0.4932 | 0.3014 | 43.0 | 49.0 | 78.0 | 0.6282 | 0.5513 | 4.0 | 6.0 | 83.0 | 0.0723 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 8.7814 | 0.0072 | 3788.0128 | 2625.6504 | 91.0 | 299.0 | 0.3043 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 20.0 | 35.0 | 73.0 | 0.4795 | 0.2740 | 44.0 | 50.0 | 78.0 | 0.6410 | 0.5641 | 4.0 | 6.0 | 83.0 | 0.0723 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 8.8823 | 0.0072 | 3831.5105 | 2655.8007 | 90.0 | 299.0 | 0.3010 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 18.0 | 35.0 | 73.0 | 0.4795 | 0.2466 | 44.0 | 49.0 | 78.0 | 0.6282 | 0.5641 | 4.0 | 6.0 | 83.0 | 0.0723 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 8.9496 | 0.0072 | 3860.5383 | 2675.9212 | 89.0 | 299.0 | 0.2977 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 44.0 | 49.0 | 78.0 | 0.6282 | 0.5641 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 9.0121 | 0.0072 | 3887.5219 | 2694.6248 | 91.0 | 299.0 | 0.3043 | 64.0 | 0.2140 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 45.0 | 50.0 | 78.0 | 0.6410 | 0.5769 | 4.0 | 6.0 | 83.0 | 0.0723 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 9.0640 | 0.0072 | 3909.8990 | 2710.1354 | 90.0 | 299.0 | 0.3010 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 50.0 | 78.0 | 0.6410 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 9.0766 | 0.0072 | 3915.3382 | 2713.9056 | 90.0 | 299.0 | 0.3010 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 50.0 | 78.0 | 0.6410 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 9.1070 | 0.0072 | 3928.4656 | 2723.0049 | 90.0 | 299.0 | 0.3010 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 50.0 | 78.0 | 0.6410 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 9.1207 | 0.0072 | 3934.3616 | 2727.0917 | 91.0 | 299.0 | 0.3043 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 45.0 | 51.0 | 78.0 | 0.6538 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 9.1378 | 0.0072 | 3941.7153 | 2732.1888 | 91.0 | 299.0 | 0.3043 | 64.0 | 0.2140 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 45.0 | 51.0 | 78.0 | 0.6538 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 9.1702 | 0.0072 | 3955.7186 | 2741.8952 | 92.0 | 299.0 | 0.3077 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 45.0 | 52.0 | 78.0 | 0.6667 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 9.1845 | 0.0072 | 3961.8830 | 2746.1680 | 92.0 | 299.0 | 0.3077 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 9.2133 | 0.0072 | 3974.2828 | 2754.7629 | 90.0 | 299.0 | 0.3010 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 45.0 | 50.0 | 78.0 | 0.6410 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 9.1654 | 0.0072 | 3953.6505 | 2740.4617 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 9.1935 | 0.0072 | 3965.7623 | 2748.8569 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 45.0 | 52.0 | 78.0 | 0.6667 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 9.1695 | 0.0072 | 3955.4172 | 2741.6863 | 92.0 | 299.0 | 0.3077 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 9.1911 | 0.0072 | 3964.7189 | 2748.1337 | 92.0 | 299.0 | 0.3077 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 9.1985 | 0.0072 | 3967.9350 | 2750.3630 | 91.0 | 299.0 | 0.3043 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 46.0 | 51.0 | 78.0 | 0.6538 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 9.1956 | 0.0072 | 3966.6719 | 2749.4874 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 9.2326 | 0.0072 | 3982.6086 | 2760.5339 | 91.0 | 299.0 | 0.3043 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 51.0 | 78.0 | 0.6538 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 9.1921 | 0.0072 | 3965.1599 | 2748.4394 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 9.2354 | 0.0072 | 3983.8575 | 2761.3996 | 91.0 | 299.0 | 0.3043 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 51.0 | 78.0 | 0.6538 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 9.2211 | 0.0072 | 3977.6828 | 2757.1196 | 91.0 | 299.0 | 0.3043 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 45.0 | 51.0 | 78.0 | 0.6538 | 0.5769 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 9.2311 | 0.0072 | 3981.9701 | 2760.0914 | 91.0 | 299.0 | 0.3043 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 34.0 | 73.0 | 0.4658 | 0.2192 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 9.2726 | 0.0072 | 3999.8736 | 2772.5011 | 91.0 | 299.0 | 0.3043 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 34.0 | 73.0 | 0.4658 | 0.2192 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 39.0 | 39 | 9.2677 | 0.0072 | 3997.7545 | 2771.0322 | 92.0 | 299.0 | 0.3077 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 40.0 | 40 | 9.2004 | 0.0072 | 3968.7216 | 2750.9082 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 41.0 | 41 | 9.2552 | 0.0072 | 3992.3608 | 2767.2936 | 92.0 | 299.0 | 0.3077 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 42.0 | 42 | 9.2246 | 0.0072 | 3979.1888 | 2758.1635 | 92.0 | 299.0 | 0.3077 | 65.0 | 0.2174 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 35.0 | 73.0 | 0.4795 | 0.2055 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 43.0 | 43 | 9.2157 | 0.0072 | 3975.3528 | 2755.5046 | 91.0 | 299.0 | 0.3043 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 18.0 | 35.0 | 73.0 | 0.4795 | 0.2466 | 47.0 | 51.0 | 78.0 | 0.6538 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 44.0 | 44 | 9.2293 | 0.0072 | 3981.2013 | 2759.5585 | 91.0 | 299.0 | 0.3043 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 34.0 | 73.0 | 0.4658 | 0.2329 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 45.0 | 45 | 9.2613 | 0.0072 | 3994.9932 | 2769.1183 | 92.0 | 299.0 | 0.3077 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 46.0 | 46 | 9.2348 | 0.0072 | 3983.5662 | 2761.1977 | 91.0 | 299.0 | 0.3043 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 34.0 | 73.0 | 0.4658 | 0.2329 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 47.0 | 47 | 9.2317 | 0.0072 | 3982.2565 | 2760.2899 | 92.0 | 299.0 | 0.3077 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 48.0 | 48 | 9.2409 | 0.0072 | 3986.1969 | 2763.0211 | 92.0 | 299.0 | 0.3077 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 49.0 | 49 | 9.2229 | 0.0072 | 3978.4535 | 2757.6538 | 91.0 | 299.0 | 0.3043 | 66.0 | 0.2207 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 15.0 | 34.0 | 73.0 | 0.4658 | 0.2055 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 50.0 | 50 | 9.2700 | 0.0072 | 3998.7541 | 2771.7251 | 91.0 | 299.0 | 0.3043 | 68.0 | 0.2274 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 18.0 | 34.0 | 73.0 | 0.4658 | 0.2466 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 51.0 | 51 | 9.2386 | 0.0072 | 3985.2168 | 2762.3418 | 92.0 | 299.0 | 0.3077 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 17.0 | 35.0 | 73.0 | 0.4795 | 0.2329 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 52.0 | 52 | 9.2318 | 0.0072 | 3982.2619 | 2760.2936 | 92.0 | 299.0 | 0.3077 | 67.0 | 0.2241 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 16.0 | 35.0 | 73.0 | 0.4795 | 0.2192 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 53.0 | 53 | 9.2348 | 0.0072 | 3983.5704 | 2761.2006 | 91.0 | 299.0 | 0.3043 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 18.0 | 34.0 | 73.0 | 0.4658 | 0.2466 | 47.0 | 52.0 | 78.0 | 0.6667 | 0.6026 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
| 0.0 | 54.0 | 54 | 9.2415 | 0.0072 | 3986.4454 | 2763.1934 | 91.0 | 299.0 | 0.3043 | 69.0 | 0.2308 | 0.0 | 0.0 | 64.0 | 0.0 | 0.0 | 19.0 | 34.0 | 73.0 | 0.4658 | 0.2603 | 46.0 | 52.0 | 78.0 | 0.6667 | 0.5897 | 4.0 | 5.0 | 83.0 | 0.0602 | 0.0482 | 0.0 | 0.0 | 1.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
colabbear/VARCO-VISION-2.0-1.7B-OCR-bnb-4bit
|
colabbear
| 2025-08-19T06:10:37Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llava_onevision",
"feature-extraction",
"bnb-my-repo",
"multimodal",
"OCR",
"ncsoft",
"ncai",
"varco",
"image-text-to-text",
"conversational",
"en",
"ko",
"arxiv:2408.03326",
"base_model:NCSOFT/VARCO-VISION-2.0-1.7B-OCR",
"base_model:quantized:NCSOFT/VARCO-VISION-2.0-1.7B-OCR",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
image-text-to-text
| 2025-08-19T06:10:30Z
|
---
base_model:
- NCSOFT/VARCO-VISION-2.0-1.7B-OCR
license: cc-by-nc-4.0
library_name: transformers
tags:
- bnb-my-repo
- multimodal
- OCR
- ncsoft
- ncai
- varco
pipeline_tag: image-text-to-text
language:
- en
- ko
---
# NCSOFT/VARCO-VISION-2.0-1.7B-OCR (Quantized)
## Description
This model is a quantized version of the original model [`NCSOFT/VARCO-VISION-2.0-1.7B-OCR`](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR).
It was quantized to 4-bit with the BitsAndBytes library via the [bnb-my-repo](https://huggingface.co/spaces/bnb-community/bnb-my-repo) space.
## Quantization Details
- **Quantization Type**: int4
- **bnb_4bit_quant_type**: nf4
- **bnb_4bit_use_double_quant**: True
- **bnb_4bit_compute_dtype**: bfloat16
- **bnb_4bit_quant_storage**: uint8
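For reference, the settings above correspond to the following `BitsAndBytesConfig` (a sketch; this repository already stores quantized weights, so the config is only needed when re-quantizing the original checkpoint yourself):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization details listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
)
# Pass quantization_config=bnb_config to from_pretrained when quantizing from scratch.
```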
# 📄 Original Model Information
# VARCO-VISION-2.0-1.7B-OCR
<div align="center">
<img src="./varco-vision.png" width="100%" style="background-color:white; padding:10px;" />
</div>
## Introduction
**VARCO-VISION-2.0-1.7B-OCR** is a lightweight yet powerful OCR-specialized model derived from VARCO-VISION-2.0-1.7B, designed to deliver efficient and accurate text recognition in real-world scenarios. Unlike conventional vision-language models (VLMs) that primarily focus on transcribing visible text, this model performs both recognition and spatial localization by detecting bounding boxes around each character, enabling structured, layout-aware OCR outputs.
The model supports both Korean and English, making it well-suited for multilingual environments where mixed-script documents are common. Each recognized character is paired with its precise position in the image, formatted as `<char>{characters}</char><bbox>{x1}, {y1}, {x2}, {y2}</bbox>`, where the coordinates correspond to the top-left (`x1`, `y1`) and bottom-right (`x2`, `y2`) corners of the character's bounding box.
While VARCO-VISION-2.0-14B demonstrates strong OCR capabilities as part of its broader multimodal reasoning skills, deploying such a large model for single-task use cases can be computationally inefficient. VARCO-VISION-2.0-1.7B-OCR addresses this with a task-optimized design that retains high accuracy while significantly reducing resource requirements, making it ideal for real-time or resource-constrained applications.

## 🚨News🎙️
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B-OCR at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR)
- 📰 2025-07-28: We released VARCO-VISION-2.0-1.7B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B)
- 📰 2025-07-18: We updated the checkpoint of VARCO-VISION-2.0-14B for improved performance.
- 📰 2025-07-16: We released VARCO-VISION-2.0-14B at [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B)
- 📰 2025-07-16: We released GME-VARCO-VISION-Embedding at [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding)
## VARCO-VISION-2.0 Family
| Model Name | Base Models (Vision / Language) | HF Link |
| :------------------------: | :-------------------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------: |
| VARCO-VISION-2.0-14B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-14B ](https://huggingface.co/Qwen/Qwen3-14B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-14B) |
| VARCO-VISION-2.0-1.7B | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B) |
| VARCO-VISION-2.0-1.7B-OCR | [siglip2-so400m-patch16-384](https://huggingface.co/google/siglip2-so400m-patch16-384) / [Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) | [link](https://huggingface.co/NCSOFT/VARCO-VISION-2.0-1.7B-OCR) |
| GME-VARCO-VISION-Embedding | [Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct) | [link](https://huggingface.co/NCSOFT/GME-VARCO-VISION-Embedding) |
## Model Architecture
VARCO-VISION-2.0 follows the architecture of [LLaVA-OneVision](https://arxiv.org/abs/2408.03326).
## Evaluation
### OCR Benchmark
| Benchmark | CLOVA OCR | PaddleOCR | EasyOCR | VARCO-VISION-2.0-1.7B-OCR |
| :-------: | :--------:| :-------: | :-----: | :-----------------------: |
| CORD | *93.9* | 91.4 | 77.8 | **95.6** |
| ICDAR2013 | *94.4* | 92.0 | 85.0 | **95.5** |
| ICDAR2015 | **84.1** | 73.7 | 57.9 | *75.4* |
## Usage
To use this model, we recommend installing `transformers` version **4.53.1 or higher**.
Additionally, for best results, we **recommend upscaling input images so that the longer side is at least *2,304* pixels**.
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration
model_name = "NCSOFT/VARCO-VISION-2.0-1.7B-OCR"
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
model_name,
torch_dtype=torch.float16,
attn_implementation="sdpa",
device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_name)
image = Image.open("/path/to/image.jpg")
# Image upscaling for OCR performance boost
w, h = image.size
target_size = 2304
if max(w, h) < target_size:
scaling_factor = target_size / max(w, h)
new_w = int(w * scaling_factor)
new_h = int(h * scaling_factor)
image = image.resize((new_w, new_h))
conversation = [
{
"role": "user",
"content": [
{"type": "image", "image": image},
{"type": "text", "text": ""},
],
},
]
inputs = processor.apply_chat_template(
conversation,
add_generation_prompt=True,
tokenize=True,
return_dict=True,
return_tensors="pt"
).to(model.device, torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=1024)
generate_ids_trimmed = [
out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generate_ids)
]
output = processor.decode(generate_ids_trimmed[0], skip_special_tokens=False)
print(output)
```
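The decoded output interleaves `<char>` and `<bbox>` tags in the format described above. A small regex-based parsing sketch (assuming well-formed tags; the sample string is made up):
```python
import re

# Parses "<char>{characters}</char><bbox>{x1}, {y1}, {x2}, {y2}</bbox>" pairs.
pattern = re.compile(
    r"<char>(.*?)</char><bbox>\s*([\d.]+),\s*([\d.]+),\s*([\d.]+),\s*([\d.]+)\s*</bbox>"
)

def parse_ocr(output: str):
    return [
        {"text": m.group(1), "bbox": tuple(float(v) for v in m.groups()[1:])}
        for m in pattern.finditer(output)
    ]

sample = "<char>한</char><bbox>10, 12, 34, 40</bbox><char>A</char><bbox>36, 12, 58, 40</bbox>"
print(parse_ocr(sample))
```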
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755583661
|
lqpl
| 2025-08-19T06:09:22Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:08:43Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755583614
|
IvanJAjebu
| 2025-08-19T06:08:47Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:08:20Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
resistz/sft_Llama-3.2-3B_ultra200k
|
resistz
| 2025-08-19T06:07:53Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T06:03:17Z
|
---
library_name: transformers
model_name: sft_Llama3.2-3B_ultra200k
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for sft_Llama3.2-3B_ultra200k
This model is a fine-tuned version of an unspecified base model.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="resistz/sft_Llama-3.2-3B_ultra200k", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/resistzzz97/Alignment_Influence/runs/wrdnrblz)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
BlazePro12/merged_grok_data_mcp_1
|
BlazePro12
| 2025-08-19T06:07:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:02:06Z
|
---
base_model: google/gemma-3-270m-it
library_name: transformers
model_name: merged_grok_data_mcp_1
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for merged_grok_data_mcp_1
This model is a fine-tuned version of [google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="BlazePro12/merged_grok_data_mcp_1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755583572
|
yaelahnal
| 2025-08-19T06:07:21Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:07:03Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kimono998/Wordle-curr-neg-3_lora_adapter_iter_30
|
kimono998
| 2025-08-19T06:07:17Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T06:07:13Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
bonnieliu2002/act_collect_empty_bottle_black_white_wrist_5k_bs8
|
bonnieliu2002
| 2025-08-19T06:06:51Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:bonnieliu2002/collect_empty_bottle_black_white_wrist",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T06:06:22Z
|
---
datasets: bonnieliu2002/collect_empty_bottle_black_white_wrist
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
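As a schematic illustration of the chunking idea (not LeRobot's actual API — `policy`, `env`, and all shapes below are placeholders):
```python
import numpy as np

def run_chunked_control(policy, env, chunk_size=50, steps=500):
    """Query the policy for a chunk of future actions each step, then average
    all overlapping predictions for the current timestep (temporal ensembling).
    ACT itself uses exponentially weighted averaging; a plain mean is used here."""
    buffers = {}  # timestep -> list of predicted actions covering it
    obs = env.reset()
    for t in range(steps):
        chunk = policy(obs)  # assumed shape: (chunk_size, action_dim)
        for k in range(chunk_size):
            buffers.setdefault(t + k, []).append(chunk[k])
        obs = env.step(np.mean(buffers.pop(t), axis=0))
    return obs
```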
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
donoway/ARC-Easy_Llama-3.2-1B-w2bxj3e2
|
donoway
| 2025-08-19T06:06:44Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:finetune:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:54:42Z
|
---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ARC-Easy_Llama-3.2-1B-w2bxj3e2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ARC-Easy_Llama-3.2-1B-w2bxj3e2
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4944
- Model Preparation Time: 0.0063
- Mdl: 2051.2357
- Accumulated Loss: 1421.8082
- Correct Preds: 394.0
- Total Preds: 570.0
- Accuracy: 0.6912
- Correct Gen Preds: 382.0
- Gen Accuracy: 0.6702
- Correct Gen Preds 32: 109.0
- Correct Preds 32: 116.0
- Total Labels 32: 158.0
- Accuracy 32: 0.7342
- Gen Accuracy 32: 0.6899
- Correct Gen Preds 33: 114.0
- Correct Preds 33: 117.0
- Total Labels 33: 152.0
- Accuracy 33: 0.7697
- Gen Accuracy 33: 0.75
- Correct Gen Preds 34: 99.0
- Correct Preds 34: 101.0
- Total Labels 34: 142.0
- Accuracy 34: 0.7113
- Gen Accuracy 34: 0.6972
- Correct Gen Preds 35: 60.0
- Correct Preds 35: 60.0
- Total Labels 35: 118.0
- Accuracy 35: 0.5085
- Gen Accuracy 35: 0.5085
- Correct Gen Preds 36: 0.0
- Correct Preds 36: 0.0
- Total Labels 36: 0.0
- Accuracy 36: 0.0
- Gen Accuracy 36: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 112
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | Mdl | Accumulated Loss | Correct Preds | Total Preds | Accuracy | Correct Gen Preds | Gen Accuracy | Correct Gen Preds 32 | Correct Preds 32 | Total Labels 32 | Accuracy 32 | Gen Accuracy 32 | Correct Gen Preds 33 | Correct Preds 33 | Total Labels 33 | Accuracy 33 | Gen Accuracy 33 | Correct Gen Preds 34 | Correct Preds 34 | Total Labels 34 | Accuracy 34 | Gen Accuracy 34 | Correct Gen Preds 35 | Correct Preds 35 | Total Labels 35 | Accuracy 35 | Gen Accuracy 35 | Correct Gen Preds 36 | Correct Preds 36 | Total Labels 36 | Accuracy 36 | Gen Accuracy 36 |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:---------:|:----------------:|:-------------:|:-----------:|:--------:|:-----------------:|:------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|:--------------------:|:----------------:|:---------------:|:-----------:|:---------------:|
| No log | 0 | 0 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4003 | 1.0 | 1 | 1.5354 | 0.0063 | 1262.6022 | 875.1692 | 172.0 | 570.0 | 0.3018 | 170.0 | 0.2982 | 154.0 | 154.0 | 158.0 | 0.9747 | 0.9747 | 0.0 | 0.0 | 152.0 | 0.0 | 0.0 | 15.0 | 17.0 | 142.0 | 0.1197 | 0.1056 | 1.0 | 1.0 | 118.0 | 0.0085 | 0.0085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.4003 | 2.0 | 2 | 2.5540 | 0.0063 | 2100.2576 | 1455.7876 | 152.0 | 570.0 | 0.2667 | 152.0 | 0.2667 | 0.0 | 0.0 | 158.0 | 0.0 | 0.0 | 152.0 | 152.0 | 152.0 | 1.0 | 1.0 | 0.0 | 0.0 | 142.0 | 0.0 | 0.0 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 1.9783 | 3.0 | 3 | 1.7445 | 0.0063 | 1434.5302 | 994.3405 | 164.0 | 570.0 | 0.2877 | 164.0 | 0.2877 | 151.0 | 151.0 | 158.0 | 0.9557 | 0.9557 | 13.0 | 13.0 | 152.0 | 0.0855 | 0.0855 | 0.0 | 0.0 | 142.0 | 0.0 | 0.0 | 0.0 | 0.0 | 118.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.8352 | 4.0 | 4 | 1.1644 | 0.0063 | 957.5167 | 663.7000 | 270.0 | 570.0 | 0.4737 | 270.0 | 0.4737 | 76.0 | 76.0 | 158.0 | 0.4810 | 0.4810 | 18.0 | 18.0 | 152.0 | 0.1184 | 0.1184 | 120.0 | 120.0 | 142.0 | 0.8451 | 0.8451 | 56.0 | 56.0 | 118.0 | 0.4746 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.5867 | 5.0 | 5 | 1.2507 | 0.0063 | 1028.4822 | 712.8896 | 284.0 | 570.0 | 0.4982 | 275.0 | 0.4825 | 131.0 | 135.0 | 158.0 | 0.8544 | 0.8291 | 16.0 | 16.0 | 152.0 | 0.1053 | 0.1053 | 80.0 | 82.0 | 142.0 | 0.5775 | 0.5634 | 48.0 | 51.0 | 118.0 | 0.4322 | 0.4068 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.1943 | 6.0 | 6 | 1.3948 | 0.0063 | 1147.0146 | 795.0499 | 344.0 | 570.0 | 0.6035 | 241.0 | 0.4228 | 82.0 | 128.0 | 158.0 | 0.8101 | 0.5190 | 56.0 | 74.0 | 152.0 | 0.4868 | 0.3684 | 69.0 | 90.0 | 142.0 | 0.6338 | 0.4859 | 34.0 | 52.0 | 118.0 | 0.4407 | 0.2881 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.018 | 7.0 | 7 | 1.8824 | 0.0063 | 1548.0051 | 1072.9954 | 387.0 | 570.0 | 0.6789 | 365.0 | 0.6404 | 106.0 | 118.0 | 158.0 | 0.7468 | 0.6709 | 105.0 | 110.0 | 152.0 | 0.7237 | 0.6908 | 95.0 | 98.0 | 142.0 | 0.6901 | 0.6690 | 59.0 | 61.0 | 118.0 | 0.5169 | 0.5 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0005 | 8.0 | 8 | 2.4944 | 0.0063 | 2051.2357 | 1421.8082 | 394.0 | 570.0 | 0.6912 | 382.0 | 0.6702 | 109.0 | 116.0 | 158.0 | 0.7342 | 0.6899 | 114.0 | 117.0 | 152.0 | 0.7697 | 0.75 | 99.0 | 101.0 | 142.0 | 0.7113 | 0.6972 | 60.0 | 60.0 | 118.0 | 0.5085 | 0.5085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 9.0 | 9 | 2.9139 | 0.0063 | 2396.2351 | 1660.9436 | 393.0 | 570.0 | 0.6895 | 388.0 | 0.6807 | 115.0 | 118.0 | 158.0 | 0.7468 | 0.7278 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 94.0 | 96.0 | 142.0 | 0.6761 | 0.6620 | 62.0 | 62.0 | 118.0 | 0.5254 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 10.0 | 10 | 3.2474 | 0.0063 | 2670.4270 | 1850.9989 | 389.0 | 570.0 | 0.6825 | 384.0 | 0.6737 | 118.0 | 120.0 | 158.0 | 0.7595 | 0.7468 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 87.0 | 90.0 | 142.0 | 0.6338 | 0.6127 | 62.0 | 62.0 | 118.0 | 0.5254 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 11.0 | 11 | 3.5037 | 0.0063 | 2881.2221 | 1997.1110 | 390.0 | 570.0 | 0.6842 | 385.0 | 0.6754 | 121.0 | 122.0 | 158.0 | 0.7722 | 0.7658 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 85.0 | 89.0 | 142.0 | 0.6268 | 0.5986 | 62.0 | 62.0 | 118.0 | 0.5254 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 12.0 | 12 | 3.6605 | 0.0063 | 3010.1267 | 2086.4608 | 390.0 | 570.0 | 0.6842 | 385.0 | 0.6754 | 123.0 | 124.0 | 158.0 | 0.7848 | 0.7785 | 117.0 | 117.0 | 152.0 | 0.7697 | 0.7697 | 83.0 | 87.0 | 142.0 | 0.6127 | 0.5845 | 62.0 | 62.0 | 118.0 | 0.5254 | 0.5254 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 13.0 | 13 | 3.8304 | 0.0063 | 3149.8940 | 2183.3402 | 391.0 | 570.0 | 0.6860 | 384.0 | 0.6737 | 125.0 | 126.0 | 158.0 | 0.7975 | 0.7911 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 82.0 | 87.0 | 142.0 | 0.6127 | 0.5775 | 61.0 | 62.0 | 118.0 | 0.5254 | 0.5169 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 14.0 | 14 | 3.8963 | 0.0063 | 3204.0477 | 2220.8766 | 391.0 | 570.0 | 0.6860 | 384.0 | 0.6737 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 85.0 | 87.0 | 142.0 | 0.6127 | 0.5986 | 60.0 | 62.0 | 118.0 | 0.5254 | 0.5085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 15.0 | 15 | 3.9921 | 0.0063 | 3282.8743 | 2275.5150 | 390.0 | 570.0 | 0.6842 | 383.0 | 0.6719 | 125.0 | 127.0 | 158.0 | 0.8038 | 0.7911 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 83.0 | 87.0 | 142.0 | 0.6127 | 0.5845 | 60.0 | 61.0 | 118.0 | 0.5169 | 0.5085 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 16.0 | 16 | 4.0738 | 0.0063 | 3350.0615 | 2322.0857 | 388.0 | 570.0 | 0.6807 | 380.0 | 0.6667 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 116.0 | 116.0 | 152.0 | 0.7632 | 0.7632 | 82.0 | 85.0 | 142.0 | 0.5986 | 0.5775 | 58.0 | 60.0 | 118.0 | 0.5085 | 0.4915 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 17.0 | 17 | 4.0949 | 0.0063 | 3367.3849 | 2334.0934 | 386.0 | 570.0 | 0.6772 | 378.0 | 0.6632 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 81.0 | 84.0 | 142.0 | 0.5915 | 0.5704 | 58.0 | 60.0 | 118.0 | 0.5085 | 0.4915 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 18.0 | 18 | 4.2008 | 0.0063 | 3454.4411 | 2394.4361 | 385.0 | 570.0 | 0.6754 | 376.0 | 0.6596 | 125.0 | 128.0 | 158.0 | 0.8101 | 0.7911 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 78.0 | 82.0 | 142.0 | 0.5775 | 0.5493 | 58.0 | 60.0 | 118.0 | 0.5085 | 0.4915 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 19.0 | 19 | 4.2113 | 0.0063 | 3463.0756 | 2400.4211 | 386.0 | 570.0 | 0.6772 | 378.0 | 0.6632 | 125.0 | 128.0 | 158.0 | 0.8101 | 0.7911 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 81.0 | 84.0 | 142.0 | 0.5915 | 0.5704 | 58.0 | 60.0 | 118.0 | 0.5085 | 0.4915 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 20.0 | 20 | 4.2743 | 0.0063 | 3514.9065 | 2436.3475 | 382.0 | 570.0 | 0.6702 | 374.0 | 0.6561 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 77.0 | 81.0 | 142.0 | 0.5704 | 0.5423 | 58.0 | 59.0 | 118.0 | 0.5 | 0.4915 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 21.0 | 21 | 4.2790 | 0.0063 | 3518.7559 | 2439.0157 | 378.0 | 570.0 | 0.6632 | 372.0 | 0.6526 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 79.0 | 81.0 | 142.0 | 0.5704 | 0.5563 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 22.0 | 22 | 4.3109 | 0.0063 | 3544.9824 | 2457.1946 | 381.0 | 570.0 | 0.6684 | 373.0 | 0.6544 | 125.0 | 128.0 | 158.0 | 0.8101 | 0.7911 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 56.0 | 58.0 | 118.0 | 0.4915 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 23.0 | 23 | 4.3390 | 0.0063 | 3568.1153 | 2473.2290 | 380.0 | 570.0 | 0.6667 | 373.0 | 0.6544 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 79.0 | 82.0 | 142.0 | 0.5775 | 0.5563 | 56.0 | 57.0 | 118.0 | 0.4831 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 24.0 | 24 | 4.3289 | 0.0063 | 3559.8267 | 2467.4839 | 380.0 | 570.0 | 0.6667 | 372.0 | 0.6526 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 115.0 | 115.0 | 152.0 | 0.7566 | 0.7566 | 77.0 | 81.0 | 142.0 | 0.5704 | 0.5423 | 56.0 | 57.0 | 118.0 | 0.4831 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 25.0 | 25 | 4.3637 | 0.0063 | 3588.4182 | 2487.3020 | 379.0 | 570.0 | 0.6649 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 78.0 | 82.0 | 142.0 | 0.5775 | 0.5493 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 26.0 | 26 | 4.3781 | 0.0063 | 3600.2845 | 2495.5270 | 378.0 | 570.0 | 0.6632 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 56.0 | 57.0 | 118.0 | 0.4831 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 27.0 | 27 | 4.3894 | 0.0063 | 3609.5398 | 2501.9423 | 377.0 | 570.0 | 0.6614 | 369.0 | 0.6474 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 77.0 | 81.0 | 142.0 | 0.5704 | 0.5423 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 28.0 | 28 | 4.3832 | 0.0063 | 3604.4250 | 2498.3971 | 378.0 | 570.0 | 0.6632 | 372.0 | 0.6526 | 124.0 | 126.0 | 158.0 | 0.7975 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 79.0 | 81.0 | 142.0 | 0.5704 | 0.5563 | 56.0 | 58.0 | 118.0 | 0.4915 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 29.0 | 29 | 4.3985 | 0.0063 | 3617.0423 | 2507.1427 | 379.0 | 570.0 | 0.6649 | 370.0 | 0.6491 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 77.0 | 81.0 | 142.0 | 0.5704 | 0.5423 | 55.0 | 57.0 | 118.0 | 0.4831 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 30.0 | 30 | 4.4130 | 0.0063 | 3628.9597 | 2515.4032 | 378.0 | 570.0 | 0.6632 | 369.0 | 0.6474 | 125.0 | 128.0 | 158.0 | 0.8101 | 0.7911 | 112.0 | 112.0 | 152.0 | 0.7368 | 0.7368 | 77.0 | 81.0 | 142.0 | 0.5704 | 0.5423 | 55.0 | 57.0 | 118.0 | 0.4831 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 31.0 | 31 | 4.3846 | 0.0063 | 3605.6251 | 2499.2288 | 377.0 | 570.0 | 0.6614 | 370.0 | 0.6491 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 32.0 | 32 | 4.4328 | 0.0063 | 3645.2685 | 2526.7076 | 378.0 | 570.0 | 0.6632 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 56.0 | 57.0 | 118.0 | 0.4831 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 33.0 | 33 | 4.4063 | 0.0063 | 3623.4620 | 2511.5925 | 379.0 | 570.0 | 0.6649 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 78.0 | 82.0 | 142.0 | 0.5775 | 0.5493 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 34.0 | 34 | 4.4189 | 0.0063 | 3633.7879 | 2518.7499 | 378.0 | 570.0 | 0.6632 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 114.0 | 114.0 | 152.0 | 0.75 | 0.75 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 35.0 | 35 | 4.4016 | 0.0063 | 3619.6355 | 2508.9402 | 379.0 | 570.0 | 0.6649 | 371.0 | 0.6509 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 78.0 | 82.0 | 142.0 | 0.5775 | 0.5493 | 56.0 | 57.0 | 118.0 | 0.4831 | 0.4746 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 36.0 | 36 | 4.4070 | 0.0063 | 3624.0410 | 2511.9938 | 378.0 | 570.0 | 0.6632 | 370.0 | 0.6491 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 78.0 | 81.0 | 142.0 | 0.5704 | 0.5493 | 55.0 | 57.0 | 118.0 | 0.4831 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 37.0 | 37 | 4.4347 | 0.0063 | 3646.8202 | 2527.7831 | 375.0 | 570.0 | 0.6579 | 368.0 | 0.6456 | 123.0 | 126.0 | 158.0 | 0.7975 | 0.7785 | 113.0 | 113.0 | 152.0 | 0.7434 | 0.7434 | 77.0 | 80.0 | 142.0 | 0.5634 | 0.5423 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| 0.0 | 38.0 | 38 | 4.4391 | 0.0063 | 3650.3984 | 2530.2633 | 375.0 | 570.0 | 0.6579 | 368.0 | 0.6456 | 124.0 | 127.0 | 158.0 | 0.8038 | 0.7848 | 112.0 | 112.0 | 152.0 | 0.7368 | 0.7368 | 77.0 | 80.0 | 142.0 | 0.5634 | 0.5423 | 55.0 | 56.0 | 118.0 | 0.4746 | 0.4661 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.51.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
samwu0217/diffusion_toy_2_2
|
samwu0217
| 2025-08-19T06:06:34Z
| 0
| 0
|
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"diffusion",
"dataset:samwu0217/toy_2",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-08-19T06:04:54Z
|
---
datasets: samwu0217/toy_2
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- robotics
- lerobot
- diffusion
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version of how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
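For programmatic use, the checkpoint can also be loaded directly in Python. A minimal sketch, assuming the `DiffusionPolicy` import path from LeRobot's published examples (the path may differ across LeRobot versions):
```python
from lerobot.common.policies.diffusion.modeling_diffusion import DiffusionPolicy

# Load the pretrained diffusion policy weights and config from the Hub.
policy = DiffusionPolicy.from_pretrained("samwu0217/diffusion_toy_2_2")
policy.eval()
print(policy.config)  # inspect expected observation/action shapes
```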
---
## Model Details
- **License:** apache-2.0
|
KCS97/clock
|
KCS97
| 2025-08-19T06:06:19Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-19T05:56:38Z
|
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks clock
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - KCS97/clock
This is a dreambooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on a photo of sks clock using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch: load the fine-tuned pipeline and sample with the
# instance prompt this model was trained on ("a photo of sks clock").
# Assumes a CUDA GPU; on CPU, drop .to("cuda") and use torch.float32.
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained("KCS97/clock", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks clock").images[0]
image.save("sks_clock.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
GradientNetwork/Qwen2.5-7B-ECHO-MATH-GRPO
|
GradientNetwork
| 2025-08-19T06:04:48Z
| 0
| 0
| null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-08-18T12:29:51Z
|
---
license: apache-2.0
---
|
VoilaRaj/78_dpG7CL
|
VoilaRaj
| 2025-08-19T06:03:01Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T05:59:04Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
VanWu1983/model_W20250817
|
VanWu1983
| 2025-08-19T06:02:15Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-19T05:56:17Z
|
---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** VanWu1983
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
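A minimal sketch for loading this checkpoint with 🤗 Transformers (this assumes the repo contains merged weights under the repo id from this card; 4-bit bases may need `bitsandbytes` installed):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "VanWu1983/model_W20250817"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```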
|
katanyasekolah/blockassist-bc-silky_sprightly_cassowary_1755581629
|
katanyasekolah
| 2025-08-19T06:01:59Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silky sprightly cassowary",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:01:56Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silky sprightly cassowary
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
lqpl/blockassist-bc-hairy_insectivorous_antelope_1755583071
|
lqpl
| 2025-08-19T06:01:28Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"hairy insectivorous antelope",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:59:25Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- hairy insectivorous antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hakimjustbao/blockassist-bc-raging_subtle_wasp_1755581680
|
hakimjustbao
| 2025-08-19T06:01:26Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"raging subtle wasp",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:01:23Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- raging subtle wasp
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755583183
|
IvanJAjebu
| 2025-08-19T06:01:25Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:01:02Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755583222
|
0xaoyama
| 2025-08-19T06:01:00Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T06:00:49Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Jesteban247/medgemma_braincancer
|
Jesteban247
| 2025-08-19T06:00:08Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"gemma3",
"image-text-to-text",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:unsloth/medgemma-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/medgemma-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
image-text-to-text
| 2025-08-19T05:58:52Z
|
---
base_model: unsloth/medgemma-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** Jesteban247
- **License:** apache-2.0
- **Finetuned from model :** unsloth/medgemma-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
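A minimal sketch using the 🤗 Transformers `image-text-to-text` pipeline (the image URL below is a placeholder; substitute a real scan):
```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Jesteban247/medgemma_braincancer", device_map="auto")

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://example.com/brain_mri.png"},  # placeholder URL
        {"type": "text", "text": "Describe the findings in this scan."},
    ]},
]
out = pipe(text=messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])
```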
|
yaelahnal/blockassist-bc-mute_clawed_crab_1755583110
|
yaelahnal
| 2025-08-19T05:59:39Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mute clawed crab",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:59:21Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mute clawed crab
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
sampingkaca72/blockassist-bc-armored_stealthy_elephant_1755581603
|
sampingkaca72
| 2025-08-19T05:59:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"armored stealthy elephant",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:58:58Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- armored stealthy elephant
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
vwzyrraz7l/blockassist-bc-tall_hunting_vulture_1755581346
|
vwzyrraz7l
| 2025-08-19T05:56:20Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tall hunting vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:56:17Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tall hunting vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF
|
gmonsoon
| 2025-08-19T05:56:15Z
| 0
| 0
|
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:gmonsoon/Qwen3-4b-REnewbie-NEXT",
"base_model:quantized:gmonsoon/Qwen3-4b-REnewbie-NEXT",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T05:56:02Z
|
---
base_model: gmonsoon/Qwen3-4b-REnewbie-NEXT
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF
This model was converted to GGUF format from [`gmonsoon/Qwen3-4b-REnewbie-NEXT`](https://huggingface.co/gmonsoon/Qwen3-4b-REnewbie-NEXT) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/gmonsoon/Qwen3-4b-REnewbie-NEXT) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo gmonsoon/Qwen3-4b-REnewbie-NEXT-Q4_K_M-GGUF --hf-file qwen3-4b-renewbie-next-q4_k_m.gguf -c 2048
```
|
Kurosawama/Llama-3.1-8B-Instruct-Retranslation-align
|
Kurosawama
| 2025-08-19T05:55:58Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"trl",
"dpo",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-19T05:55:55Z
|
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
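Pending details from the authors, a minimal generic sketch, assuming this is a causal LM checkpoint (repo id taken from this card's path):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Kurosawama/Llama-3.1-8B-Instruct-Retranslation-align"  # assumed from the card path
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
```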
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HoangVuSnape/halong_embedding_pr_v2_ep30
|
HoangVuSnape
| 2025-08-19T05:55:34Z
| 0
| 0
|
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:1472",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_model:hiieu/halong_embedding",
"base_model:finetune:hiieu/halong_embedding",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-08-19T05:55:22Z
|
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:1472
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: hiieu/halong_embedding
widget:
- source_sentence: Những điểm đặc biệt của chương trình học là gì?
sentences:
- 'Các phòng thí nghiệm này giúp sinh viên thực hành và nghiên cứu các phản ứng
hoá học, phân tích chất lượng sản phẩm và môi trường. CÁC ĐIỂM ĐẶC BIỆT
Chương trình học thực tiễn: Sinh viên có cơ hội tham gia các nghiên cứu thực tế
tại các phòng thí nghiệm của trường và các công ty, giúp họ phát triển các kỹ
năng thực hành và nghiên cứu hoá học. Môi trường học tập quốc tế: Sinh viên có
cơ hội tham gia các chương trình trao đổi sinh viên và hợp tác nghiên cứu với
các đối tác quốc tế trong lĩnh vực hoá học. Học bổng và cơ hội du học: Các chương
trình học bổng và cơ hội du học bậc thạc sĩ, tiến sĩ tại các trường đại học danh
tiếng trên thế giới. TRIỂN VỌNG NGHỀ NGHIỆP & CƠ HỘI VIỆC LÀM
Sinh viên tốt nghiệp ngành Hoá học có thể làm việc trong các lĩnh vực như:
Công nghiệp hoá chất và dược phẩm: Làm việc tại các công ty sản xuất hoá chất,
dược phẩm, sản xuất vật liệu và sản phẩm hoá học khác. Ngành thực phẩm và bảo
vệ môi trường: Nghiên cứu và phát triển các sản phẩm thực phẩm, phân tích chất
lượng thực phẩm, và xử lý chất thải hoá học trong công nghiệp.'
- 'Trường Đại học Ngoại Thương Cơ sở II
Tiếng Anh: Foreign Trade University Ho Chi Minh City Campus (FTU2) Trường Đại
học Ngoại Thương cơ sở II là cơ sở đào tạo phía Nam của Trường Đại học Ngoại thương
tại Hà Nội, đại học chuyên ngành kinh tế đầu ngành tại Việt Nam và thành viên
của Bộ Giáo dục và Đào tạo. Cơ sở này được thành lập dựa trên nhu cầu đào tạo
cán bộ trong lĩnh vực kinh tế và kinh doanh quốc tế tại các tỉnh thành phía Nam
trong giai đoạn hội nhập kinh tế quốc tế. Cơ sở được thành lập theo Quyết định
số 1485/GD-ĐT ngày 16/07/1993 của Bộ trưởng Bộ Giáo dục và Đào tạo Việt Nam. Tên
trường: Trường Đại học Ngoại thương (Cơ sở 2)
Tên tiếng Anh: Foreign Trade University (FTU)
Mã trường: NTS
Trực thuộc: Bộ Giáo dục và Đào tạo
Loại trường: Công lập
Loại hình đào tạo: Đại học – Sau đại học
Lĩnh vực: Kinh tế
Địa chỉ: Số 15 Đường D5, Khu Văn Thánh Bắc, Phường 25, Quận Bình Thạnh, TP Hồ
Chí Minh
Điện thoại:
Email:
Website: http://cs2.ftu.edu.vn/
Fanpage: https://www.facebook.com/ftu2hcmc/
Lịch sử
1962
Ngày 20/06/1962, theo Quyết định của Thủ tướng Chính phủ, Khoa Quan hệ Quốc tế
tách khỏi Trường Đại học Kinh tế - Tài chính để thành lập Trường Cán bộ Ngoại
giao - Ngoại thương trực thuộc Bộ Ngoại giao. Trụ sở ban đầu được đặt tại làng
Láng, tỉnh Hà Đông (nay là phường Láng Thượng, Hà Nội). 1967
Ngày 05/08/1967, theo đề nghị của Bộ Ngoại giao và Bộ Ngoại thương, Thủ tướng
Phạm Văn Đồng đã ký Quyết định số 123/CP, chia tách Trường Cán bộ Ngoại giao -
Ngoại thương thành hai trường:
Trường Ngoại giao (nay là Học viện Ngoại giao) trực thuộc Bộ Ngoại giao. Trường
Ngoại thương thuộc Bộ Ngoại thương (nay là Bộ Công Thương). 1985
Trường Đại học Ngoại thương chuyển từ Bộ Ngoại thương sang trực thuộc Bộ Đại học
và Trung học Chuyên nghiệp (nay là Bộ Giáo dục và Đào tạo). 1993
Ngày 16/07/1993, xuất phát từ nhu cầu đào tạo cán bộ kinh tế và kinh doanh quốc
tế tại Thành phố Hồ Chí Minh và các tỉnh thành phía Nam, Cơ sở II Trường Đại học
Ngoại thương tại TP.'
- 'Điểm xét tuyển được làm tròn đến 02 chữ số thập phân. - Điểm xét tuyển được xác
định như sau (làm tròn đến 02 chữ số thập phân): Điểm xét tuyển = [(ĐM1*HS môn
1+ ĐM2*HS môn 2 + ĐM3 * HS môn 3)*3]/(Tổng hệ số) + Điểm ưu tiên Khu vực + Điểm
ưu tiên đối tượng. (*) Điểm trúng tuyển ngành Luật, Luật kinh tế: tổ hợp Văn,
Sử, Địa cao hơn 1.5 điểm. (1) Ngành ngôn ngữ Anh, ngôn ngữ Trung Quốc, ngôn ngữ
Nhật, ngôn ngữ Hàn Quốc: Ngoại ngữ nhân hệ số 2. (2) Các ngành Khoa học máy tính,
Khoa học máy tính Chất lượng cao, Công nghệ thông tin, CTKT công trình xây dựng,
CNKT công trình xây dựng Chất lượng cao, Quản lý xây dựng: Toán nhân hệ số 2.
(3) Các ngành Chất lượng cao: Luật kinh tế, Ngôn ngữ Anh, Ngôn ngữ Trung Quốc,
Quản trị kinh doanh, Tài chính ngân hàng, Kế toán: Ngoại ngữ hệ số 2. VII.Điểm
chuẩn Trường ĐH Mở TP.HCM năm 2021 dựa vào kết quả học tập THPT(học bạ)
i.'
- source_sentence: Nguyên tắc xét tuyển của Trường được áp dụng như thế nào khi thí
sinh đăng ký nhiều nguyện vọng hoặc nhiều phương thức xét tuyển?
sentences:
- '4. Đối với phương thức kết hợp thi tuyển và xét tuyển
4.1. Thí sinh dự xét tuyển ngành Giáo dục Mầm non trình độ đại học
Phải tham gia kỳ thi năng khiếu do Trường Đại học Sư phạm Thành phố Hồ Chí Minh
tổ chức và có kết quả đạt từ 5,0 điểm trở lên;
Đối với thí sinh xét tuyển sử dụng kết quả thi tốt nghiệp THPT năm 2024: ngưỡng
điểm đảm bảo chất lượng đầu vào, điều kiện nhận hồ sơ đăng ký xét tuyển được thông
báo chính thức sau khi Bộ Giáo dục và Đào tạo xác định ngưỡng đảm bảo chất lượng
đầu vào đại học (căn cứ kết quả kỳ thi tốt nghiệp THPT năm 2024). Đối với thí
sinh xét tuyển sử dụng kết quả học tập THPT: chỉ áp dụng đối với thí sinh tốt
nghiệp THPT năm 2024 đồng thời phải thỏa một trong hai điều kiện sau:
+ Có học lực lớp 12 xếp loại giỏi;
+ Có điểm xét tốt nghiệp THPT từ 8,0 trở lên. 4.2. Thí sinh dự xét tuyển ngành
Giáo dục Mầm non trình độ cao đẳng
Phải tham gia kỳ thi năng khiếu do Trường Đại học Sư phạm Thành phố Hồ Chí Minh
tổ chức và có kết quả đạt từ 5,0 điểm trở lên;
Đối với thí sinh xét tuyển sử dụng kết quả thi tốt nghiệp THPT năm 2024: ngưỡng
điểm đảm bảo chất lượng đầu vào, điều kiện nhận hồ sơ đăng ký xét tuyển được thông
báo chính thức sau khi Bộ Giáo dục và Đào tạo xác định ngưỡng đảm bảo chất lượng
đầu vào đại học (căn cứ kết quả kỳ thi tốt nghiệp THPT năm 2024). Đối với thí
sinh xét tuyển sử dụng kết quả học tập THPT: chỉ áp dụng đối với thí sinh tốt
nghiệp THPT năm 2024 đồng thời phải thỏa một trong hai điều kiện sau:
+ Có học lực lớp 12 xếp loại khá;
+ Có điểm xét tốt nghiệp THPT từ 6,5 trở lên. 4.3. Thí sinh dự xét tuyển ngành
Giáo dục Thể chất
Phải tham gia kỳ thi năng khiếu do Trường Đại học Sư phạm Thành phố Hồ Chí Minh
tổ chức và có kết quả đạt từ 5,0 điểm trở lên;
Đối với thí sinh xét tuyển sử dụng điểm thi tốt nghiệp THPT năm 2024: ngưỡng điểm
đảm bảo chất lượng đầu vào, điều kiện nhận hồ sơ đăng ký xét tuyển được thông
báo chính thức sau khi Bộ Giáo dục và Đào tạo xác định ngưỡng đảm bảo chất lượng
đầu vào đại học (căn cứ kết quả kỳ thi tốt nghiệp THPT năm 2024);
Đối với thí sinh xét tuyển sử dụng kết quả học tập THPT: chỉ áp dụng đối với thí
sinh tốt nghiệp THPT năm 2024 đồng thời thỏa thêm một trong các điều kiện sau:
+ Có học lực lớp 12 xếp loại khá trở lên;
+ Có điểm xét tốt nghiệp THPT từ 6,5 trở lên;
+ Là vận động viên cấp 1, kiện tướng, vận động viên đã từng đoạt huy chương tại
Hội khỏe Phù Đổng, các giải trẻ quốc gia và quốc tế hoặc giải vô địch quốc gia
và quốc tế có điểm thi năng khiếu do trường tổ chức đạt loại xuất sắc (từ 9,0
trở lên theo thang điểm 10,0).'
- 'Danh mục các ngành điều kiện nộp hồ sơ xét tuyển (xem tại đây). Quy định chứng
chỉ tiếng Anh quốc tế tương đương (xem tại đây)
6. Xét tuyển thẳng, ưu tiên xét tuyển theo Quy chế của Bộ GD&ĐT – Mã phương thức
301: Thực hiện theo quy định của Bộ GD&ĐT
7. Các lưu ý khi đăng ký NVXT và nguyên tắc xét tuyển trên hệ thống Bộ
a. Các lưu ý khi đăng ký NVXT
Thí sinh nên tra cứu thông tin các nguyện vọng đăng ký xét tuyển vào TDTU theo
phương thức riêng tại: https://tracuuxettuyen.tdtu.edu.vn trước khi đăng ký nguyện
vọng lên hệ thống của Bộ GD&ĐT. Số CMND/CCCD thí sinh đã đăng ký xét tuyển trên
hệ thống của TDTU; đăng ký phương thức 4 trên hệ thống của Đại học Quốc gia TP.HCM
phải trùng khớp với số CMND/CCCD sử dụng đăng ký tài khoản trên hệ thống của Bộ
GD&ĐT. Trường hợp thí sinh đã đăng ký số CMND/CCCD không trùng khớp nhau giữa
các hệ thống trên, thí sinh phải liên hệ với TDTU để được hỗ trợ cập nhật lại
số CMND/CCCD cho trùng khớp với hệ thống của Bộ trước khi đăng ký nguyện vọng.
Thí sinh sẽ không đủ điều kiện xét tuyển nếu không sử dụng cùng 1 số CMND/CCCD
đăng ký giữa các hệ thống trên. Thí sinh xét tuyển vào chương trình đại học bằng
tiếng Anh, chương trình liên kết quốc tế nhưng không nộp chứng chỉ tiếng Anh theo
quy định, không dự thi năng lực tiếng Anh hoặc dự thi năng lực tiếng Anh kết quả
không đạt nếu đủ điểm trúng tuyển sẽ trúng tuyển vào chương trình dự bị tiếng
Anh. Khi thí sinh làm thủ tục nhập học, Nhà trường sẽ tổ chức cho thí sinh thi
đánh giá năng lực tiếng Anh. Nếu kết quả thi đánh giá năng lực của thí sinh đạt
trình độ tiếng Anh theo yêu cầu của chương trình (B1 đối với chương trình đại
học bằng tiếng Anh, B2 đối với chương trình liên kết đào tạo quốc tế) sẽ được
nhập học vào chương trình chính thức. Trường hợp chưa đạt năng lực tiếng Anh đầu
vào, thí sinh sẽ học chương trình dự bị tiếng Anh. b. Nguyên tắc xét tuyển
Nếu một NVXT của thí sinh đăng ký vào Trường có chọn nhiều căn cứ xét tuyển và
tương ứng có nhiều phương thức xét tuyển (Phương thức 1, 2, 3, 4) thì Trường sẽ
thực hiện việc xét tuyển theo thứ tự ưu tiên lần lượt của các phương thức như
sau: Phương thức 1, Phương thức 3, Phương thức 4, Phương thức 2. Thí sinh có nhiều
NVXT đủ điều kiện trúng tuyển thì chỉ được công nhận trúng tuyển và gọi nhập học
theo nguyện vọng cao nhất.'
- 'Thí sinh có thể dự thi cả 2 đợt thi năng khiếu để dùng điểm cao nhất của 2 đợt
thi xét tuyển (đợt thi 1 dự kiến ngày 15-17/08/2021; đợt thi 2 dự kiến ngày 17-20/8/2021).
TDTU không nhận điểm thi năng khiếu của các Trường khác chuyển sang. Xem chi tiết
thông báo thi năng khiếu tại https://admission.tdtu.edu.vn
+ Thí sinh thuộc đối tượng 2- đợt 2 xét tuyển vào chương trình đại học bằng tiếng
Anh phải có Chứng chỉ tiếng Anh quốc tế tương đương IELTS 5.0 trở lên (còn thời
hạn trong vòng 2 năm tính đến ngày 01/10/2021); Thí sinh không có chứng chỉ tiếng
Anh quốc tế tương đương IELTS 5.0 trở lên còn thời hạn theo quy định của TDTU
phải đăng ký dự thi Năng lực tiếng Anh do TDTU tổ chức (trừ ngành Ngôn ngữ Anh
chỉ nhận chứng chỉ tiếng Anh quốc tế theo quy định) tại website: https://thinangkhieu.tdtu.edu.vn.'
- source_sentence: Những đối tượng nào có thể đăng ký xét tuyển vào Đại học Sư phạm
Kỹ thuật TP.HCM và cần đáp ứng các điều kiện gì?
sentences:
- 'Hồ Chí Minh được thành lập theo Quyết định số 1485/GD-ĐT. Cơ sở vật chất
Địa chỉ: Số 15, Đường D5, Phường 25, Quận Bình Thạnh, TP. Hồ Chí Minh. Ban đầu,
do chưa có cơ sở vật chất riêng, Cơ sở II phải thuê cơ sở của Trường Cao đẳng
Kinh tế Đối ngoại. Qua thời gian, trường đã xây dựng được cơ sở mới đáp ứng nhu
cầu giảng dạy và học tập. Diện tích khuôn viên: Gần 5.000 m². Khu vực giảng dạy
chính: Sảnh A và sảnh B, đồng thời là nơi đặt trụ sở Ban Giám hiệu và các khoa,
phòng ban quản lý. Trang thiết bị: Nhiều phòng học và phòng chức năng được trang
bị hiện đại. Ngoài ra, trong khuôn viên còn có phân viện VJCC cơ sở TP. Hồ Chí
Minh, được hỗ trợ xây dựng bởi nguồn vốn từ Chính phủ Nhật Bản, tương tự như phân
viện tại Hà Nội. Cơ cấu tổ chức và đội ngũ cán bộ, giáo viên
Trong thời gian đầu mới thành lập, Cơ sở II chỉ có 02 cán bộ, và hầu hết các hoạt
động được chỉ đạo trực tiếp từ Cơ sở I tại Hà Nội. Tuy nhiên, với quy mô đào tạo
ngày càng tăng, Cơ sở II đã nhanh chóng củng cố cơ cấu tổ chức và đội ngũ cán
bộ, giáo viên. Hiện tại, Cơ sở II có hơn 100 cán bộ, giáo viên cơ hữu, công tác
tại 11 Ban và 05 Bộ môn. Các Ban
Ban Tổ chức - Hành chính
Ban Kế hoạch - Tài chính
Ban Quản lý đào tạo
Ban Công tác chính trị & Sinh viên
Ban Đào tạo quốc tế
Ban Quản trị thiết bị
Ban Quản lý Khoa học & Hợp tác quốc tế
Ban Khảo thí & Đảm bảo chất lượng
Ban Truyền thông & Quan hệ đối ngoại
Ban Thư viện
Ban Công tác Đảng & Đoàn thể
Các Bộ môn
Bộ môn Khoa học cơ bản
Bộ môn Kinh doanh & Thương mại quốc tế
Bộ môn Ngoại ngữ
Bộ môn Kinh tế - Luật
Bộ môn Quản trị kinh doanh & Tài chính - Kế toán'
- 'THÔNG TIN TUYỂN SINH Đại học Sư phạm Kỹ thuật TP.HCM
. Thông tin chung
1. Thời gian xét tuyển
Theo lịch tuyển sinh chung của Bộ GD&ĐT và kế hoạch tuyển sinh của trường công
bố cụ thể trên website. 2. Đối tượng tuyển sinh
Thí sinh đã tốt nghiệp THPT. 3. Phạm vi tuyển sinh
Tuyển sinh trong cả nước. 4. Phương thức tuyển sinh
4.1. Phương thức xét tuyển
Phương thức 1: Xét tuyển học bạ THPT. Phương thức 2: Xét tuyển thí sinh theo kết
quả điểm thi tốt nghiệp THPT năm 2024 theo các tổ hợp môn xét tuyển từng ngành
học. Phương thức 3: Xét tuyển thẳng, ưu tiên xét tuyển thẳng. 4.2. Ngưỡng đảm
bảo chất lượng đầu vào, điều kiện nhận ĐKXT
Phương thức xét tuyển bằng điểm thi THPT 2024: thí sinh phải tốt nghiệp THPT và
thỏa điều kiện ngưỡng đảm bảo chất lượng đầu vào của Trường. Thông báo ngưỡng
đảm bảo sau khi thí sinh có kết quả thi THPT. Phương thức xét tuyển bằng học bạ
THPT tốt nghiệp (tốt nghiệp THPT 2024): thí sinh tốt nghiệp THPT và điểm trung
bình học bạ mỗi môn học theo tổ hợp đăng ký xét tuyển từ 5,0 trở lên. Hồi đồng
thi tuyển uy quyền cho những thành viên thường trực Hội đồng tuyển sinh quyết
định điểm trúng tuyển các phương thức xét. Điềm chuẩn ngành Sư phạm tiếng Anh
theo các phương thức xét tuyển sớm sẽ được điều chỉnh khi có chỉ tiêu được giao
của Bộ GD&ĐT. 4.3.'
- '4. CÁC NGÀNH ĐÀO TẠO
a. ĐẠI HỌC
Cử nhân Sư phạm Tin học
Cử nhân Công nghệ Thông tin
b. SAU ĐẠI HỌC
Thạc sĩ Khoa học máy tính
vii. Khoa Vật lý
1. CHẤT LƯỢNG ĐÀO TẠO
ĐÀO TẠO CỬ NHÂN (4 NĂM)
CN Sư phạm Vật lý, CN Vật lý học
CN Sư phạm Công nghệ
TUYỂN SINH: 100 - 150 SV
ĐÀO TẠO CAO HỌC (2 NĂM)
Bắt đầu đào tạo Thạc sĩ từ 1999
ThS Lý luận và phương pháp dạy học bộ môn Vật lý
ThS Vật Lý Nguyên tử và hạt nhân
TUYỂN SINH: 15 - 25 HV/năm
2. CHẤT LƯỢNG GIẢNG VIÊN
ĐỘI NGŨ GIẢNG VIÊN: 35
Giảng viên: 35
Giáo sư : 1
Phó Giáo sư Tiến sĩ: 4
Tiến sĩ: 17
Thạc sĩ: 10
Cử nhân: 3
3. MỤC TIÊU ĐÀO TẠO
Đào tạo cử nhân Vật lý học, có phẩm chất chính trị, đạo đức và sức khỏe tốt, hiểu
và vận dụng các tri thức cơ bản của Vật lý học theo định hướng chuyên ngành. Sau
khi tốt nghiệp, người học có đủ năng lực để làm việc trong môi trường nghiên cứu,
sản xuất kinh doanh có sử dụng kiến thức Vật lý học cũng như có thể tiếp tục theo
các bậc học cao hơn. Đào tạo giáo viên có trình độ cử nhân Sư phạm Vật lý (hệ
chính quy, chính quy địa phương, hệ chuyên tu, tại chức). Sau khi tốt nghiệp,
người học có phẩm chất chính trị, đạo đức và sức khỏe tốt, hiểu và vận dụng các
tri thức cơ bản của Vật lý học, lý luận và phương pháp giảng dạy Vật lý ở trường
trung học. Đào tạo giáo viên dạy Công nghệ bậc Trung học cơ sở và Trung học phổ
thông. Sau khi tốt nghiệp, người học có phẩm chất chính trị, đạo đức và sức khỏe
tốt, hiểu và vận dụng các tri thức khoa học, công nghệ nền tảng vào trong dạy
học môn Công nghệ ở trường phổ thông. Sau khi tốt nghiệp, người học có đủ năng
lực để làm việc trong môi trường nghiên cứu, sản xuất kinh doanh có sử dụng kiến
thức khoa học, công nghệ cũng như có thể tiếp tục theo các bậc học cao hơn.'
- source_sentence: Quá trình hình thành và phát triển của Đại học Kinh tế Thành phố
Hồ Chí Minh diễn ra như thế nào?
sentences:
- '1. Điểm trúng tuyển
Phương thức xét tuyển theo kết quả học tập THPT – Đợt 2 (PT1-Đ2), ưu tiên xét
tuyển theo quy định của TDTU dành cho học sinh trường chuyên trên cả nước và một
số trường trọng điểm ở TP.HCM – Đợt 2 (PT3-ĐT1-Đ2); ưu tiên xét tuyển theo quy
định của TDTU dành cho học sinh có chứng chỉ tiếng Anh quốc tế tương đương IELTS
5.0 trở lên – Đợt 2 (PT3-ĐT2-Đ2): Điểm xét tuyển được thực hiện theo đúng đề án
tuyển sinh đại học năm 2022, thang điểm 40 và được làm tròn đến 02 chữ số thập
phân (đã bao gồm điểm ưu tiên khu vực, đối tượng, hệ số trường THPT, điểm ưu tiên
thành tích học sinh giỏi). Phương thức xét tuyển theo điểm thi THPT năm 2022 (PT2):
Điểm xét tuyển được thực hiện theo đúng đề án tuyển sinh đại học năm 2022, là
tổng điểm của 3 môn theo tổ hợp (có nhân hệ số môn theo tổ hợp, ngành xét
tuyển theo thang điểm 40), cộng với điểm ưu tiên khu vực, đối tượng theo thang
điểm 40 (nếu có), được làm tròn đến 2 chữ số thập phân theo quy định của Bộ GD&ĐT.
Phương thức xét tuyển theo điểm thi đánh giá năng lực của Đại học Quốc gia TP.HCM
năm 2022 (PT5): Điểm xét tuyển được thực hiện theo đúng đề án tuyển sinh đại học
năm 2022 theo thang điểm 1200 (đã bao gồm điểm ưu tiên khu vực, đối tượng theo
thang điểm 1200)
Phương thức xét tuyển theo kết quả học tập THPT -Đợt 1 (PT1-Đ1) và ưu tiên xét
tuyển theo quy định của TDTU đợt 1 (PT3-Đ1), điểm trúng tuyển theo thông báo Kết
quả sơ tuyển PT1, PT3-ĐT1 các ngành trình độ đại học chính quy 2022-Đợt 1 ngày
30/6/2022 của HĐTS Trường. Bảng điểm trúng tuyển theo các phương thức như sau:
Here''s the updated table based on your additional data. I''ve kept the structure
consistent, with the text "HHMT≥6.0" moved to the "Điểm TT PT5" column where relevant:
STT Mã ngành Tên ngành Điểm TT PT1-Đ2 Điểm TT PT2 Điểm TT PT3-ĐT1-Đ2 Điểm TT PT3-ĐT2-Đ2
Điểm TT PT5 Chương trình tiêu chuẩn 1 7210402 Thiết kế công nghiệp 26.5 23 30
650 HHMT≥6.0 2 7210403 Thiết kế đồ họa 29.5 27 32 700 HHMT≥6.0 3 7210404 Thiết
kế thời trang 26.5 24 30 650 HHMT≥6.0 4 7220201 Ngôn ngữ Anh 37 34 36 800 5 7220204
Ngôn ngữ Trung Quốc 37 33 35 800 6 7310301 Xã hội học 31.5 28.5 31 650 7 7310630
Việt Nam học (Chuyên ngành: Du lịch và lữ hành) 34 31.8 33 700 8 7310630Q Việt
Nam học (Chuyên ngành: Du lịch và quản lý du lịch) 34 31.8 33 700 9 7340101 Quản
trị kinh doanh (Chuyên ngành: Quản trị nguồn nhân lực) 37 33.6 36 800 10 7340101N
Quản trị kinh doanh (Chuyên ngành: Quản trị nhà hàng - khách sạn) 35.75 30.5 35
800 11 7340115 Marketing 37.75 34.8 37 870 12 7340120 Kinh doanh quốc tế 37.5
34.5 37 870 13 7340201 Tài chính - Ngân hàng 36.75 33.6 35.25 750 14 7340301 Kế
toán 36 33.3 34.25 720 15 7340408 Quan hệ lao động (Chuyên ngành Quản lý Quan
hệ lao động, Chuyên ngành Hành vi tổ chức) 28 27 31 700 16 7380101 Luật 36.5 33.5
35.5 720 17 7420201 Công nghệ sinh học 33.5 26.5 32 680 18 7440301 Khoa học môi
trường 26 22 31 650 19 7460112 Toán ứng dụng 31.5 31.1 31 680 20 7460201 Thống
kê 28 29.1 31 680 21 7480101 Khoa học máy tính 38 35 35 850 22 7480102 Mạng máy
tính và truyền thông dữ liệu 36.25 34.5 32.5 800 23 7480103 Kỹ thuật phần mềm
38 35.4 35.5 850 24 7510406 Công nghệ kỹ thuật môi trường (Chuyên ngành Cấp thoát
nước và môi trường nước) 26 22 30 650 25 7520114 Kỹ thuật cơ điện tử 33 28.5 32
680 26 7520201 Kỹ thuật điện 31 27.5 32 650 27 7520207 Kỹ thuật điện tử - viễn
thông 31 29.5 32 650 28 7520216 Kỹ thuật điều khiển và tự động hóa 33 31.7 32
680 29 7520301 Kỹ thuật hóa học 34 28.5 32 680 30 7580101 Kiến trúc 28 26 32 680
HHMT≥6.0 31 7580105 Quy hoạch vùng và đô thị 27 23 30 650 32 7580108 Thiết kế
nội thất 27 24 32 650 HHMT≥6.0 33 7580201 Kỹ thuật xây dựng 29 25 32 650 34 7580205
Kỹ thuật xây dựng công trình giao thông 27 23 30 650 35 7720201 Dược học 36 HSG
lớp 12 33.2 HSG lớp 12 800 HSG lớp 12 36 7760101 Công tác xã hội 27 25.3 30 650
37 7810301 Quản lý thể dục thể thao (Chuyên ngành kinh doanh thể thao và tổ chức
sự kiện) 31.5 27 30 650 38 7810302 Golf 27 23 30 650 39 7850201 Bảo hộ lao động
27 23 30 650 CHƯƠNG TRÌNH CHẤT LƯỢNG CAO 1 F7210403 Thiết kế đồ họa - Chương
trình Chất lượng cao 26.5 23 30 650 HHMT≥6.0 2 F7220201 Ngôn ngữ Anh – Chương
trình Chất lượng cao 34 29.9 32 700 3 F7310630Q Việt Nam học (Chuyên ngành Du
lịch và Quản lý du lịch) - Chương trình Chất lượng cao 27 27 32 650 4 F7340101
Quản trị kinh doanh (Chuyên ngành: Quản trị nguồn nhân lực) - Chương trình Chất
lượng cao 35.5 32.7 33 700 5 F7340101N Quản trị kinh doanh (Chuyên ngành: Quản
trị nhà hàng - khách sạn) - Chương trình Chất lượng cao 33 29.1 32 700 6 F7340115
Marketing - Chương trình Chất lượng cao 36 33.5 35 750 7 F7340120 Kinh doanh quốc
tế - Chương trình Chất lượng cao 36.5 32.8 36 750 8 F7340201 Tài chính - Ngân
hàng - Chương trình Chất lượng cao 33 30.1 32 700 9 F7340301 Kế toán - Chương
trình Chất lượng cao 31 29.2 32 650 10 F7380101 Luật - Chương trình Chất lượng
cao 32 32.1 32 650 11 F7420201 Công nghệ sinh học - Chương trình Chất lượng cao
27 22 30 650 12 F7480101 Khoa học máy tính - Chương trình Chất lượng cao 36.25
34.5 32 800 13 F7480103 Kỹ thuật phần mềm - Chương trình Chất lượng cao 36.25
34.5 32 800 14 F7520201 Kỹ thuật điện - Chương trình Chất lượng cao 27 22 30 650
15 F7520207 Kỹ thuật điện tử - viễn thông - Chương trình Chất lượng cao 27 22
30 650 16 F7520216 Kỹ thuật điều khiển và tự động hóa - Chương trình Chất lượng
cao 27 25 30 650 17 F7580201 Kỹ thuật xây dựng - Chương trình Chất lượng cao 27
22 30 650 CHƯƠNG TRÌNH ĐẠI HỌC BẰNG TIẾNG ANH
Yêu cầu về tiếng Anh đầu vào:
Thí sinh nước ngoài ở các nước có ngôn ngữ chính là tiếng Anh không yêu cầu Chứng
chỉ tiếng Anh đầu vào quốc tế;
Thí sinh Việt Nam và thí sinh ở các nước không có ngôn ngữ chính là tiếng Anh:
phải có Chứng chỉ IELTS 5.0 trở lên hoặc tương đương (có giá trị từ ngày 01/10/2020
và còn giá trị đến ngày 01/10/2022); hoặc phải dự thi đánh giá năng lực tiếng
Anh bằng Hệ thống đánh giá năng lực tiếng Anh theo chuẩn quốc tế của TDTU để được
xác nhận đủ điều kiện tiếng Anh theo học chương trình (trừ Ngành ngôn ngữ Anh
phải có chứng chỉ tiếng Anh quốc tế tương đương IELTS 5.0 trở lên theo quy định).
Trường hợp số lượng học viên nhập học đủ điều kiện học chính thức ít hơn sĩ số
tối thiểu để mở lớp, người học được tư vấn để bảo lưu kết quả tuyển sinh, hoặc
chuyển qua các ngành/chương trình khác (nếu đáp ứng được tiêu chí tuyển đầu vào
của ngành/chương trình đó). Chương trình đại học bằng tiếng Anh:
STT Mã ngành Tên ngành Điểm TT PT1-Đ2 Điểm TT PT2 Điểm TT PT3-ĐT1-Đ2 Điểm TT PT3-ĐT2-Đ2
Điểm TT PT5 1 FA7220201 Ngôn ngữ Anh – Chương trình đại học bằng tiếng Anh 32
25 30 34.5 700 2 FA7310630Q Việt Nam học (Chuyên ngành Du lịch và Quản lý du lịch)
- Chương trình đại học bằng tiếng Anh 28 24 28 28 650 3 FA7340101N Quản trị kinh
doanh (Chuyên ngành: Quản trị nhà hàng - khách sạn) - Chương trình đại học bằng
tiếng Anh 30 27 30 30 650 4 FA7340115 Marketing - Chương trình đại học bằng tiếng
Anh 34 27 32 36 700 5 FA7340120 Kinh doanh quốc tế - Chương trình đại học bằng
tiếng Anh 34 27 32 36 700 6 FA7340201 Tài chính ngân hàng - Chương trình đại học
bằng tiếng Anh 28 24 28 28 650 7 FA7340301 Kế toán (Chuyên ngành: Kế toán quốc
tế) - Chương trình đại học bằng tiếng Anh 28 24 28 28 650 8 FA7420201 Công nghệ
sinh học - Chương trình đại học bằng tiếng Anh 28 24 28 28 650 9 FA7480101 Khoa
học máy tính - Chương trình đại học bằng tiếng Anh 30 24 30 30 650 10 FA7480103
Kỹ thuật phần mềm - Chương trình đại học bằng tiếng Anh 30 24 30 30 650 11 FA7520216
Kỹ thuật điều khiển và tự động hóa - Chương trình đại học bằng tiếng Anh 28 24
28 28 650 12 FA7580201 Kỹ thuật xây dựng - Chương trình đại học bằng tiếng Anh
28 24 28 28 650
Chương trình học tại Phân hiệu Khánh Hòa:
STT Mã ngành Tên ngành Điểm TT PT1-Đ2 Điểm TT PT2 Điểm TT PT3-ĐT1-Đ2 Điểm TT PT3-ĐT2-Đ2
Điểm TT PT5 1 N7220201 Ngôn ngữ Anh - Chương trình học Phân hiệu Khánh Hòa 28
24 31 650 2 N7310630 Việt Nam học (Chuyên ngành: Du lịch và lữ hành) - Chương
trình học Phân hiệu Khánh Hòa 27 22 30 650 3 N7340101N Quản trị kinh doanh, Chuyên
ngành: Quản trị nhà hàng - khách sạn - Chương trình học Phân hiệu Khánh Hòa 29
24 31 650 4 N7340115 Marketing - Chương trình học Phân hiệu Khánh Hòa 29 24 31
650 5 N7340301 Kế toán - Chương trình học Phân hiệu Khánh Hòa 27 22 30 650 6 N7380101
Luật - Chương trình học Phân hiệu Khánh Hòa 27 22 30 650 7 N7480103 Kỹ thuật phần
mềm - Chương trình học Phân hiệu Khánh Hòa 27 22 31 650 CHƯƠNG TRÌNH LIÊN KẾT
QUỐC TẾ
Yêu cầu về tiếng Anh đầu vào:
Thí sinh phải đạt trình độ tiếng Anh đầu vào từ B2 trở lên hoặc tương đương để
được công nhận trúng tuyển vào chương trình chính thức.Thí sinh có thể nộp chứng
chỉ IELTS 5.5 hoặc các chứng chỉ quốc tế tương đương để xét tiếng Anh đầu vào;
hoặc phải dự thi đánh giá năng lực tiếng Anh đầu khóa bằng Hệ thống đánh giá năng
lực tiếng Anh theo chuẩn quốc tế của TDTU để được xác nhận đủ điều kiện tiếng
Anh theo học chương trình. Ngoại lệ:
Nếu tiếng Anh chưa đạt chuẩn B2, nhưng người học vẫn muốn học chương trình liên
kết đào tạo quốc tế, thì được xét vào chương trình dự bị tiếng Anh (liên kết quốc
tế) và phải tham gia học bổ túc tiếng Anh tại TDTU cho đến khi đạt trình độ tương
đương chuẩn nói trên để được “quyết định nhập học và công nhận là sinh viên”.
Thời gian học tiếng Anh tối đa là 2 năm và tùy năng lực đầu vào qua kết quả đánh
giá đầu vào xếp lớp của TDTU. Sau thời gian học chương trình dự bị tiếng Anh,
nếu vẫn chưa đạt chuẩn tiếng Anh trình độ B2 hoặc tương đương; người học phải
thôi học hoặc có thể xin chuyển sang các chương trình khác (nếu vẫn bảo đảm được
các tiêu chí tuyển sinh đầu vào tương ứng của các ngành/chương trình này theo
đúng năm tuyển sinh ). Trường hợp số lượng học viên nhập học đủ điều kiện học
chính thức ít hơn sĩ số tối thiểu để mở lớp, người học được tư vấn để bảo lưu
kết quả tuyển sinh, hoặc chuyển qua các ngành/chương trình khác (nếu đáp ứng được
tiêu chí tuyển đầu vào của ngành/chương trình đó). STT Mã ngành Tên ngành Điểm
TT PT1-Đ2 Điểm TT PT2 Điểm TT PT3-ĐT1-Đ2 Điểm TT PT3-ĐT2-Đ2 Điểm TT PT5 1 K7340101
Quản trị kinh doanh (song bằng, 2+2) - Chương trình liên kết Đại học Kinh tế Praha
(Cộng hòa Séc) 28 24 28 28 650 2 K7340101N Quản trị nhà hàng khách sạn (song bằng,
2.5+1.5) - Chương trình liên kết Đại học Taylor''s (Malaysia) 28 24 28 28 650
3 K7340120 Quản trị kinh doanh quốc tế (đơn bằng, 3+1) - Chương trình liên kết
Đại học Khoa học và công nghệ Lunghwa (Đài Loan) 28 24 28 28 650 4 K7340201 Tài
chính (song bằng, 2+2) - Chương trình liên kết Đại học Feng Chia (Đài Loan) 28
24 28 28 650 5 K7340201S Tài chính (đơn bằng, 3+1) - Chương trình liên kết Đại
học Khoa học và công nghệ Lunghwa (Đài Loan) 28 24 28 28 650 6 K7340201X Tài chính
và kiểm soát (song bằng, 3+1) - Chương trình liên kết Đại học Khoa học ứng dụng
Saxion (Hà Lan) 28 24 28 28 650 7 K7340301 Kế toán (song bằng, 3+1) - Chương trình
liên kết Đại học West of England, Bristol (Anh) 28 24 28 28 650 8 K7480101 Khoa
học máy tính & Công nghệ tin học (đơn bằng, 2+2) - Chương trình liên kết Đại học
Khoa học và công nghệ Lunghwa (Đài Loan) 28 24 28 28 650 9 K7480101L Công nghệ
thông tin (song bằng, 2+2) - Chương trình liên kết Đại học La Trobe (Úc) 28 24
28 28 650 10 K7520201 Kỹ thuật điện – điện tử (song bằng, 2.5+1.5) - Chương trình
liên kết Đại học Khoa học ứng dụng Saxion (Hà Lan) 28 24 28 28 650 11 K7580201
Kỹ thuật xây dựng (song bằng, 2+2) - Chương trình liên kết Đại học La Trobe (Úc)
28 24 28 28 650 Đính kèm phụ lục điểm trúng tuyển chi tiết theo từng phương thức
Phụ lục điểm trúng tuyển chi tiết phương thức 1-đợt 2 (tại đây)
Phụ lục điểm trúng tuyển chi tiết phương thức 2 (tại đây)
Phụ lục điểm trúng tuyển chi tiết phương thức 3-đợt 2 (tại đây)
Thí sinh tra cứu kết quả trúng tuyển từ 17h ngày 17/9/2022 tại website https://tracuuxettuyen.tdtu.edu.vn
Lưu ý: Thí sinh đủ điểm trúng tuyển của TDTU công bố nhưng không có trong danh
sách trúng tuyển chính thức có thể do thí sinh đã đăng ký không chính xác nguyện
vọng trên hệ thống Bộ GD&ĐT hoặc đã trúng tuyển ở nguyện vọng khác có thứ tự ưu
tiên cao hơn.'
- 'Đại học Kinh tế Thành phố Hồ Chí Minh (UEH)
Đại học Kinh tế Thành phố Hồ Chí Minh (tiếng Anh: University of Economics Ho Chi
Minh City – UEH), còn được gọi là Đại học UEH, là một đại học công lập đa ngành
trực thuộc Bộ Giáo dục và Đào tạo. UEH nằm trong nhóm các trường đại học trọng
điểm quốc gia, dẫn đầu trong đào tạo khối ngành kinh tế tại Việt Nam. UEH không
chỉ là một trụ cột quan trọng trong hệ thống giáo dục bậc cao mà còn là trung
tâm nghiên cứu các chính sách kinh tế và quản lý cho chính phủ cùng các doanh
nghiệp lớn. UEH đã đào tạo nhiều lãnh đạo cấp cao cho các tập đoàn đa quốc gia
nổi tiếng trong và ngoài nước. Lịch sử hình thành và phát triển
1976: Thành lập với tên gọi Trường Đại học Kinh tế trực thuộc Bộ Đại học và Trung
học chuyên nghiệp. 1996: Sáp nhập với hai đơn vị khác, trở thành Trường Đại học
Kinh tế trực thuộc Đại học Quốc gia Thành phố Hồ Chí Minh. 2000: Tách ra khỏi
Đại học Quốc gia Thành phố Hồ Chí Minh, trở thành Trường Đại học Kinh tế Thành
phố Hồ Chí Minh trực thuộc Bộ Giáo dục và Đào tạo. 2021: Tái cấu trúc, thành lập
các trường thành viên và định hướng phát triển thành đại học đa ngành, đa lĩnh
vực. 2023: Chính thức chuyển đổi thành Đại học Kinh tế Thành phố Hồ Chí Minh.
Cơ sở vật chất và hoạt động
Hiện nay, UEH sở hữu: - 10 cơ sở giảng dạy tại Thành phố Hồ Chí Minh.'
- '4. CÁC NGÀNH ĐÀO TẠO
a. ĐẠI HỌC
Cử nhân Sư phạm Toán học (Hệ Chính quy, Hệ Vừa làm vừa học)
b.SAU ĐẠI HỌC
Thạc sĩ Toán giải tích
Thạc sĩ Đại số và Lý thuyết số
Thạc sĩ Hình học và Tôpô
Thạc sĩ Lý luận và Phương pháp dạy học bộ môn Toán
Tiến sĩ Toán Giải tích
Tiến sĩ Hình học và Tôpô
Tiến sĩ Lý luận và Phương pháp dạy học bộ môn Toán
c. BỒI DƯỠNG
Chuyên đề bồi dưỡng cho giáo viên tiểu học, trung học cơ sở và trung học phổ thông
về phương pháp, kĩ thuật dạy học, nội dung dạy học, kiểm tra, đánh giá, ứng dụng
công nghệ thông tin trong dạy học,…
vi. Khoa Công nghệ Thông tin
1. CHẤT LƯỢNG ĐÀO TẠO
ĐÀO TẠO CỬ NHÂN (4 NĂM)
Sư phạm Tin học: 90 – 100 SV/năm
Công nghệ Thông tin: 180 – 200 SV/năm
ĐÀO TẠO CAO HỌC (2 NĂM)
Thạc sĩ Khoa học máy tính: 15-35 HV/ năm
2. CHẤT LƯỢNG GIẢNG VIÊN
ĐỘI NGŨ GIẢNG VIÊN: 24
Tiến sĩ: 9
Thạc sĩ: 15
3. MỤC TIÊU ĐÀO TẠO
Đào tạo giáo viên dạy Tin học bậc phổ thông có trình độ cử nhân Sư phạm Tin học,
có phẩm chất chính trị, đạo đức và sức khỏe tốt, hiểu và vận dụng các tri thức
cơ bản của Tin học; Lý luận và phương pháp giảng dạy Tin học ở trường trung học,
tiểu học. Sau khi tốt nghiệp, người học có đủ năng lực để giảng dạy Tin học tại
các trường trung học, tiểu học và một số cơ sở giáo dục tương đương. Đào tạo cử
nhân Công nghệ thông tin, có phẩm chất chính trị, đạo đức và sức khỏe tốt, hiểu
và vận dụng các tri thức cơ bản về khoa học máy tính. Sau khi tốt nghiệp, người
học có đủ năng lực để làm việc trong môi trường các cơ sở sản xuất, các viện hoặc
trung tâm nghiên cứu trong lĩnh vực Công nghệ thông tin cũng như có thể tiếp tục
theo các bậc học cao hơn.'
- source_sentence: Xin hãy liệt kê các trung tâm của Trường Đại học Sư phạm Kỹ thuật
TP. Hồ Chí Minh.
sentences:
- 'Nếu có thắc mắc thí sinh vui lòng liên hệ số điện thoại hỗ trợ tuyển sinh: 19002024'
- 'Thực hiện hướng dẫn của Bộ Giáo dục và Đào tạo tại Công văn số 1919/BGDĐT-GDĐH
ngày 28 tháng 4 năm 2023, phương thức xét tuyển kết quả điểm thi tốt nghiệp Trung
học phổ thông vẫn được giữ nguyên như năm 2022. Tổ hợp môn xét tuyển: B00 (Toán
– Hóa – Sinh) chung cho tất cả các ngành. năm 2022, Trường Đại học Y khoa Phạm
Ngọc Thạch tuyển được 1.367 chỉ tiêu (đạt 104,4% so với chỉ tiêu đề ra). chỉ tiêu
tuyển sinh đại học chính quy của Trường Đại học Y khoa Phạm Ngọc Thạch năm 2023.
1. Y khoa: 660 2. Dược học: 90 3. Điều dưỡng: 250 4. Dinh dưỡng: 60 5. Răng Hàm
Mặt: 90 6. Kỹ thuật xét nghiệm y học: 50 7. Kỹ thuật hình ảnh y học: 40 8. Kỹ
thuật phục hồi chức năng: 30 9. Khúc xạ nhãn khoa: 40 10. Y tế công cộng: 56
Ghi chú: chỉ tiêu được chia cho các thí sinh có hộ khẩu ở TP HCM và ngoài TP HCM
với tỉ lệ 50%
Điểm chuẩn của trường Đại học Y khoa Phạm Ngọc Thạch 2023: Y khoa, Điểm chuẩn
thí sinh có hộ khẩu tại TP HCM(TP): 25,90, Điểm chuẩn thí sinh có hộ khẩu ngoài
TP HCM(TQ): 26.31 Dược học, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 25,28,
Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 25,25 Điều dưỡng, Điểm chuẩn
thí sinh có hộ khẩu tại TP HCM(TP): 22,40, Điểm chuẩn thí sinh có hộ khẩu ngoài
TP HCM(TQ): 22,40 Dinh dưỡng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 22,25,
Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 22,80 Răng - Hàm - Mặt, Điểm
chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 26,00, Điểm chuẩn thí sinh có hộ khẩu
ngoài TP HCM(TQ): 26,28 Kỹ thuật Xét nghiệm Y học, Điểm chuẩn thí sinh có hộ khẩu
tại TP HCM(TP): 24,54, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 24,47
Kỹ thuật Hình ảnh Y học, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,45,
Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 23,61 Khúc xạ nhãn khoa, Điểm
chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,75, Điểm chuẩn thí sinh có hộ khẩu
ngoài TP HCM(TQ): 23,75 Y tế công cộng, Điểm chuẩn thí sinh có hộ khẩu tại TP
HCM(TP): 18,85, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 18,35 Kỹ thuật
Phục hồi chức năng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,15, Điểm
chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 23,09'
- 'Phòng Đào tạo
2. Phòng Đào tạo không chính quy
3. Phòng Tuyển sinh và Công tác Sinh viên
4. Phòng Truyền thông
5. Phòng Khoa học Công nghệ - Quan hệ Quốc tế
6. Phòng Quan hệ Doanh nghiệp
7. Phòng Thanh tra - Giáo dục
8. Phòng Đảm bảo Chất lượng
9. Phòng Tổ chức - Hành chính
10. Phòng Kế hoạch - Tài chính
11. Phòng Quản trị Cơ sở Vật chất
12. Phòng Thiết bị - Vật tư
13. Ban quản lý KTX
14. Trạm Y tế
15. Bộ phận Quản lý Hồ sơ Dự án
C. Danh sách các trung tâm của Trường Đại học Sư phạm Kỹ thuật Thành phố Hồ Chí
Minh:
1. Ngoại ngữ
2. Tin học
3. Thư viện
4. Hợp tác Đào tạo Quốc tế
5. Việt – Đức
6. Dịch vụ Sinh viên
7. Thông tin – Máy tính
8. Dạy học số
9. Kỹ thuật Tổng hợp
10. Chế tạo và Thiết kế Thiết bị Công nghiệp
11. Đào tạo và hướng nghiệp quốc tế Việt Nhật
12. Đào tạo ngắn hạn
13. Giáo dục Thể chất - Quốc phòng
14. Đào tạo Bồi dưỡng giáo viên phổ thông, giáo dục nghề nghiệp miền Trung - Tây
Nguyên
15. Nghiên cứu và Ứng dụng Kỹ thuật Xây dựng
16. Bồi dưỡng và Đánh giá kỹ năng nghề Quốc gia
17. Phát triển ngôn ngữ
18. Nghiên cứu và Chuyển giao Công nghệ
19. Công nghệ phần mềm
20. Hàn ngữ học Dong A
21. Sáng tạo và Khởi nghiệp
22. Trung tâm hướng nghiệp và đào tạo Việt Nhật
D. Các ngành đào tạo trình độ đại học
Đi cùng với sự vận động và phát triển của nền kinh tế đất nước theo hướng công
nghiệp hóa, hiện đại hóa, Trường Đại học Sư phạm Kỹ thuật Tp. Hồ Chí Minh đã tiếp
cận thực tế để mở rộng đào tạo gần 30 ngành đào tạo trình độ đại học
i.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on hiieu/halong_embedding
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 768
type: dim_768
metrics:
- type: cosine_accuracy@1
value: 0.7010869565217391
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9307065217391305
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9735054347826086
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.998641304347826
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7010869565217391
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31023550724637683
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19470108695652172
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09986413043478261
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7010869565217391
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9307065217391305
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9735054347826086
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.998641304347826
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8630713876112971
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.817977376639062
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8180731029236464
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 512
type: dim_512
metrics:
- type: cosine_accuracy@1
value: 0.7133152173913043
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9470108695652174
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9782608695652174
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.9972826086956522
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7133152173913043
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31567028985507245
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.1956521739130435
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09972826086956521
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7133152173913043
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9470108695652174
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9782608695652174
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.9972826086956522
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8714349553748232
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8291790674603184
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8293969391116128
name: Cosine Map@100
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: dim 256
type: dim_256
metrics:
- type: cosine_accuracy@1
value: 0.7282608695652174
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 0.9436141304347826
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 0.9850543478260869
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 0.998641304347826
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.7282608695652174
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.31453804347826086
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19701086956521738
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09986413043478261
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.7282608695652174
name: Cosine Recall@1
- type: cosine_recall@3
value: 0.9436141304347826
name: Cosine Recall@3
- type: cosine_recall@5
value: 0.9850543478260869
name: Cosine Recall@5
- type: cosine_recall@10
value: 0.998641304347826
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.8785451406605149
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.8381901311249138
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.8383085023370349
name: Cosine Map@100
---
# SentenceTransformer based on hiieu/halong_embedding
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [hiieu/halong_embedding](https://huggingface.co/hiieu/halong_embedding) <!-- at revision b57776031035f70ed2030d2e35ecc533eb0f8f71 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'XLMRobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("HoangVuSnape/halong_embedding_pr_v2_ep30")
# Run inference
sentences = [
'Xin hãy liệt kê các trung tâm của Trường Đại học Sư phạm Kỹ thuật TP. Hồ Chí Minh.',
'Phòng Đào tạo\n\n2. Phòng Đào tạo không chính quy\n\n3. Phòng Tuyển sinh và Công tác Sinh viên\n\n4. Phòng Truyền thông\n\n5. Phòng Khoa học Công nghệ - Quan hệ Quốc tế\n\n6. Phòng Quan hệ Doanh nghiệp\n\n7. Phòng Thanh tra - Giáo dục\n\n8. Phòng Đảm bảo Chất lượng\n\n9. Phòng Tổ chức - Hành chính\n\n10. Phòng Kế hoạch - Tài chính\n\n11. Phòng Quản trị Cơ sở Vật chất\n\n12. Phòng Thiết bị - Vật tư\n\n13. Ban quản lý KTX\n\n14. Trạm Y tế\n\n15. Bộ phận Quản lý Hồ sơ Dự án\n\nC. Danh sách các trung tâm của Trường Đại học Sư phạm Kỹ thuật Thành phố Hồ Chí Minh:\n\n1. Ngoại ngữ\n\n2. Tin học\n\n3. Thư viện\n\n4. Hợp tác Đào tạo Quốc tế\n\n5. Việt – Đức\n\n6. Dịch vụ Sinh viên\n\n7. Thông tin – Máy tính\n\n8. Dạy học số\n\n9. Kỹ thuật Tổng hợp\n\n10. Chế tạo và Thiết kế Thiết bị Công nghiệp\n\n11. Đào tạo và hướng nghiệp quốc tế Việt Nhật\n\n12. Đào tạo ngắn hạn\n\n13. Giáo dục Thể chất - Quốc phòng\n\n14. Đào tạo Bồi dưỡng giáo viên phổ thông, giáo dục nghề nghiệp miền Trung - Tây Nguyên\n\n15. Nghiên cứu và Ứng dụng Kỹ thuật Xây dựng\n\n16. Bồi dưỡng và Đánh giá kỹ năng nghề Quốc gia\n\n17. Phát triển ngôn ngữ\n\n18. Nghiên cứu và Chuyển giao Công nghệ\n\n19. Công nghệ phần mềm\n\n20. Hàn ngữ học Dong A\n\n21. Sáng tạo và Khởi nghiệp\n\n22. Trung tâm hướng nghiệp và đào tạo Việt Nhật\n\nD. Các ngành đào tạo trình độ đại học\n\nĐi cùng với sự vận động và phát triển của nền kinh tế đất nước theo hướng công nghiệp hóa, hiện đại hóa, Trường Đại học Sư phạm Kỹ thuật Tp. Hồ Chí Minh đã tiếp cận thực tế để mở rộng đào tạo gần 30 ngành đào tạo trình độ đại học\n\ni.',
'Thực hiện hướng dẫn của Bộ Giáo dục và Đào tạo tại Công văn số 1919/BGDĐT-GDĐH ngày 28 tháng 4 năm 2023, phương thức xét tuyển kết quả điểm thi tốt nghiệp Trung học phổ thông vẫn được giữ nguyên như năm 2022. Tổ hợp môn xét tuyển: B00 (Toán – Hóa – Sinh) chung cho tất cả các ngành. năm 2022, Trường Đại học Y khoa Phạm Ngọc Thạch tuyển được 1.367 chỉ tiêu (đạt 104,4% so với chỉ tiêu đề ra). chỉ tiêu tuyển sinh đại học chính quy của Trường Đại học Y khoa Phạm Ngọc Thạch năm 2023. 1. Y khoa: 660 2. Dược học: 90 3. Điều dưỡng: 250 4. Dinh dưỡng: 60 5. Răng Hàm Mặt: 90 6. Kỹ thuật xét nghiệm y học: 50 7. Kỹ thuật hình ảnh y học: 40 8. Kỹ thuật phục hồi chức năng: 30 9. Khúc xạ nhãn khoa: 40 10. Y tế công cộng: 56\n\nGhi chú: chỉ tiêu được chia cho các thí sinh có hộ khẩu ở TP HCM và ngoài TP HCM với tỉ lệ 50%\n\nĐiểm chuẩn của trường Đại học Y khoa Phạm Ngọc Thạch 2023: Y khoa, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 25,90, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 26.31 Dược học, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 25,28, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 25,25 Điều dưỡng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 22,40, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 22,40 Dinh dưỡng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 22,25, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 22,80 Răng - Hàm - Mặt, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 26,00, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 26,28 Kỹ thuật Xét nghiệm Y học, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 24,54, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 24,47 Kỹ thuật Hình ảnh Y học, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,45, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 23,61 Khúc xạ nhãn khoa, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,75, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 23,75 Y tế công cộng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 18,85, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 18,35 Kỹ thuật Phục hồi chức năng, Điểm chuẩn thí sinh có hộ khẩu tại TP HCM(TP): 23,15, Điểm chuẩn thí sinh có hộ khẩu ngoài TP HCM(TQ): 23,09',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[ 1.0000, 0.6708, -0.0627],
# [ 0.6708, 1.0000, -0.0218],
# [-0.0627, -0.0218, 1.0000]])
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Dataset: `dim_768`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 768
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7011 |
| cosine_accuracy@3 | 0.9307 |
| cosine_accuracy@5 | 0.9735 |
| cosine_accuracy@10 | 0.9986 |
| cosine_precision@1 | 0.7011 |
| cosine_precision@3 | 0.3102 |
| cosine_precision@5 | 0.1947 |
| cosine_precision@10 | 0.0999 |
| cosine_recall@1 | 0.7011 |
| cosine_recall@3 | 0.9307 |
| cosine_recall@5 | 0.9735 |
| cosine_recall@10 | 0.9986 |
| **cosine_ndcg@10** | **0.8631** |
| cosine_mrr@10 | 0.818 |
| cosine_map@100 | 0.8181 |
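The table above was produced by `InformationRetrievalEvaluator`; the following is a minimal sketch of reproducing it. The query/corpus entries are hypothetical placeholders, since the evaluation split itself is not published with this card:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("HoangVuSnape/halong_embedding_pr_v2_ep30")

# Hypothetical placeholders for the (unpublished) evaluation split:
# query ids -> texts, document ids -> texts, query ids -> relevant doc ids.
queries = {"q1": "Trường có những trung tâm nào?"}
corpus = {"d1": "Danh sách các trung tâm của Trường Đại học Sư phạm Kỹ thuật..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries, corpus, relevant_docs, truncate_dim=768, name="dim_768"
)
print(evaluator(model))  # dict of cosine_accuracy@k, ndcg@10, mrr@10, ...
```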
#### Information Retrieval
* Dataset: `dim_512`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 512
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7133 |
| cosine_accuracy@3 | 0.947 |
| cosine_accuracy@5 | 0.9783 |
| cosine_accuracy@10 | 0.9973 |
| cosine_precision@1 | 0.7133 |
| cosine_precision@3 | 0.3157 |
| cosine_precision@5 | 0.1957 |
| cosine_precision@10 | 0.0997 |
| cosine_recall@1 | 0.7133 |
| cosine_recall@3 | 0.947 |
| cosine_recall@5 | 0.9783 |
| cosine_recall@10 | 0.9973 |
| **cosine_ndcg@10** | **0.8714** |
| cosine_mrr@10 | 0.8292 |
| cosine_map@100 | 0.8294 |
#### Information Retrieval
* Dataset: `dim_256`
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) with these parameters:
```json
{
"truncate_dim": 256
}
```
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.7283 |
| cosine_accuracy@3 | 0.9436 |
| cosine_accuracy@5 | 0.9851 |
| cosine_accuracy@10 | 0.9986 |
| cosine_precision@1 | 0.7283 |
| cosine_precision@3 | 0.3145 |
| cosine_precision@5 | 0.197 |
| cosine_precision@10 | 0.0999 |
| cosine_recall@1 | 0.7283 |
| cosine_recall@3 | 0.9436 |
| cosine_recall@5 | 0.9851 |
| cosine_recall@10 | 0.9986 |
| **cosine_ndcg@10** | **0.8785** |
| cosine_mrr@10 | 0.8382 |
| cosine_map@100 | 0.8383 |
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 1,472 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 9 tokens</li><li>mean: 25.49 tokens</li><li>max: 62 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 356.38 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:-----------------------------------------------------------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>Ngành Quản lý Tài nguyên và Môi trường trang bị cho sinh viên những kiến thức và kỹ năng gì?</code> | <code>Sau khi tốt nghiệp, người học sẽ:<br><br>Có kiến thức cơ bản về toán học, khoa học tự nhiên, đáp ứng cho việc tiếp thu các kiến thức giáo dục chuyên nghiệp và khả năng học tập ở trình độ cao hơn<br><br>Có các kiến thức kỹ thuật cơ sở ngành và chuyên ngành giúp đủ năng lực phát hiện, giải quyết các vấn đề liên quan đến công nghệ sản xuất, chế tạo và ứng dụng vật liệu vào trong xây dựng, kiểm soát chất lượng nguyên vật liệu và cấu kiện sản phẩm xây dựng, nghiên cứu sản xuất chế tạo và phát triển các loại vật liệu mới, hiện đại, tiên tiến, độc đáo, hiệu quả, xanh, bền vững… nhằm hướng tới sự phát triển bền vững trong công nghiệp xây dựng và kiến trúc, thiết kế và thi công trong các công trình xây dựng; có tính sáng tạo trong hoạt động nghề nghiệp, có khả năng tự học và tự nghiên cứu;<br><br>Có kỹ năng cá nhân, nghề nghiệp, giao tiếp, làm việc nhóm đủ để làm việc trong môi trường làm việc liên ngành, đa văn hóa;<br><br>Có hiểu biết về kinh tế, chính trị, có các kiến thức cơ bản trong lĩnh vực khoa học xã hội và n...</code> |
| <code>Chương trình Kỹ thuật Môi trường đào tạo sinh viên về những năng lực nào và có điểm gì nổi bật đối với chương trình giảng dạy bằng tiếng Anh?</code> | <code>Sau khi tốt nghiệp, người học sẽ:<br><br>Có kiến thức cơ bản về toán học, khoa học tự nhiên, đáp ứng cho việc tiếp thu các kiến thức giáo dục chuyên nghiệp và khả năng học tập ở trình độ cao hơn<br><br>Có các kiến thức kỹ thuật cơ sở ngành và chuyên ngành giúp đủ năng lực phát hiện, giải quyết các vấn đề liên quan đến công nghệ sản xuất, chế tạo và ứng dụng vật liệu vào trong xây dựng, kiểm soát chất lượng nguyên vật liệu và cấu kiện sản phẩm xây dựng, nghiên cứu sản xuất chế tạo và phát triển các loại vật liệu mới, hiện đại, tiên tiến, độc đáo, hiệu quả, xanh, bền vững… nhằm hướng tới sự phát triển bền vững trong công nghiệp xây dựng và kiến trúc, thiết kế và thi công trong các công trình xây dựng; có tính sáng tạo trong hoạt động nghề nghiệp, có khả năng tự học và tự nghiên cứu;<br><br>Có kỹ năng cá nhân, nghề nghiệp, giao tiếp, làm việc nhóm đủ để làm việc trong môi trường làm việc liên ngành, đa văn hóa;<br><br>Có hiểu biết về kinh tế, chính trị, có các kiến thức cơ bản trong lĩnh vực khoa học xã hội và n...</code> |
| <code>Ngành Kỹ thuật Dầu khí và Kỹ thuật Địa chất tập trung nghiên cứu và ứng dụng những lĩnh vực cốt lõi nào?</code> | <code>Các công ty nghiên cứu và khảo sát địa chất, tư vấn về nền móng công trình. Các tổ chức liên quan đến quy hoạch và phát triển đô thị. Kỹ thuật Dầu khí<br><br>Tổng quan<br><br>Kỹ thuật Dầu khí là ngành học chuyên nghiên cứu về các kỹ thuật khai thác, sản xuất và xử lý dầu khí. Sinh viên sẽ học các phương pháp khoan, khai thác dầu, khí tự nhiên, và xử lý các vấn đề kỹ thuật trong ngành dầu khí, từ việc tìm kiếm và khai thác tài nguyên cho đến việc tối ưu hóa quy trình sản xuất. CÁC ĐIỂM ĐẶC BIỆT<br><br>Khả năng ứng dụng cao: Sinh viên ngành Kỹ thuật Dầu khí sẽ được trang bị kiến thức thực tế về công nghệ khai thác dầu khí và các phương pháp tối ưu hóa sản xuất. Ngành công nghiệp chiến lược: Dầu khí vẫn là một trong những ngành công nghiệp mũi nhọn và cần nguồn nhân lực có trình độ cao trong việc khai thác và xử lý tài nguyên thiên nhiên. Triển vọng việc làm<br><br>Các công ty khai thác dầu khí trong nước và quốc tế. Các công ty tư vấn và kỹ thuật dầu khí, nghiên cứu các giải pháp tối ưu trong khai thác. Các côn...</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
12
],
"matryoshka_weights": [
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
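Because the loss above trains nested (Matryoshka) representations, the model can be loaded with a smaller output dimensionality without retraining. A minimal sketch using the `truncate_dim` option of `SentenceTransformer` (256 is chosen as an example from the dims listed above):

```python
from sentence_transformers import SentenceTransformer

# Load the same checkpoint, truncating embeddings to one of the
# Matryoshka dimensions the model was trained with.
model = SentenceTransformer(
    "HoangVuSnape/halong_embedding_pr_v2_ep30", truncate_dim=256
)

embeddings = model.encode(["Các ngành đào tạo trình độ đại học"])
print(embeddings.shape)  # (1, 256)
```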
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 8
- `learning_rate`: 2e-05
- `num_train_epochs`: 20
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 8
- `load_best_model_at_end`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 8
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 20
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: True
- `dataloader_num_workers`: 8
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 |
|:-------:|:----:|:-------------:|:----------------------:|:----------------------:|:----------------------:|
| -1 | -1 | - | 0.4874 | 0.4819 | 0.4590 |
| 0.8696 | 10 | 4.9811 | 0.5533 | 0.5504 | 0.5373 |
| 1.6957 | 20 | 3.199 | 0.5998 | 0.5976 | 0.5848 |
| 2.5217 | 30 | 2.4565 | 0.6495 | 0.6470 | 0.6373 |
| 3.3478 | 40 | 1.9622 | 0.6775 | 0.6733 | 0.6717 |
| 4.1739 | 50 | 1.5012 | 0.7019 | 0.7008 | 0.6911 |
| 5.0 | 60 | 1.1534 | 0.7123 | 0.7118 | 0.7067 |
| 5.8696 | 70 | 1.1291 | 0.7201 | 0.7274 | 0.7238 |
| 6.6957 | 80 | 0.9064 | 0.7274 | 0.7338 | 0.7316 |
| 7.5217 | 90 | 0.9967 | 0.7309 | 0.7385 | 0.7373 |
| 8.3478 | 100 | 0.8916 | 0.7320 | 0.7429 | 0.7421 |
| 9.1739 | 110 | 0.8854 | 0.7330 | 0.7448 | 0.7425 |
| 10.0 | 120 | 0.8051 | 0.7325 | 0.7449 | 0.7417 |
| 0.8696 | 10 | 0.6824 | 0.7381 | 0.7455 | 0.7486 |
| 1.6957 | 20 | 0.5332 | 0.7441 | 0.7526 | 0.7540 |
| 2.5217 | 30 | 0.4923 | 0.7538 | 0.7624 | 0.7626 |
| 3.3478 | 40 | 0.4799 | 0.7648 | 0.7699 | 0.7720 |
| 4.1739 | 50 | 0.3966 | 0.7800 | 0.7844 | 0.7923 |
| 5.0 | 60 | 0.3537 | 0.7821 | 0.7855 | 0.7937 |
| 5.8696 | 70 | 0.4381 | 0.8018 | 0.8041 | 0.8087 |
| 6.6957 | 80 | 0.3841 | 0.8031 | 0.8075 | 0.8166 |
| 7.5217 | 90 | 0.4583 | 0.7995 | 0.8096 | 0.8167 |
| 8.3478 | 100 | 0.4325 | 0.8214 | 0.8290 | 0.8314 |
| 9.1739 | 110 | 0.4238 | 0.8328 | 0.8363 | 0.8389 |
| 10.0 | 120 | 0.3629 | 0.8389 | 0.8446 | 0.8487 |
| 10.8696 | 130 | 0.3197 | 0.8428 | 0.8492 | 0.8553 |
| 11.6957 | 140 | 0.3398 | 0.8484 | 0.8568 | 0.8613 |
| 12.5217 | 150 | 0.3145 | 0.8523 | 0.8609 | 0.8635 |
| 13.3478 | 160 | 0.3005 | 0.8540 | 0.8611 | 0.8680 |
| 14.2609 | 170 | 0.3277 | 0.8571 | 0.8636 | 0.8743 |
| 15.1739 | 180 | 0.3455 | 0.8600 | 0.8678 | 0.8765 |
| 16.0 | 190 | 0.3061 | 0.8591 | 0.8668 | 0.8753 |
| 16.8696 | 200 | 0.2603 | 0.8603 | 0.8687 | 0.8763 |
| 17.6957 | 210 | 0.28 | 0.8605 | 0.8697 | 0.8776 |
| 18.5217 | 220 | 0.3435 | 0.8628 | 0.8705 | 0.8785 |
| 19.3478 | 230 | 0.2589 | 0.8631 | 0.8714 | 0.8785 |
### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 5.1.0
- Transformers: 4.55.2
- PyTorch: 2.8.0+cu128
- Accelerate: 1.10.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
KCS97/cat2
|
KCS97
| 2025-08-19T05:54:41Z
| 0
| 0
|
diffusers
|
[
"diffusers",
"tensorboard",
"safetensors",
"text-to-image",
"dreambooth",
"diffusers-training",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:stable-diffusion-v1-5/stable-diffusion-v1-5",
"base_model:finetune:stable-diffusion-v1-5/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] |
text-to-image
| 2025-08-19T05:45:07Z
|
---
base_model: stable-diffusion-v1-5/stable-diffusion-v1-5
library_name: diffusers
license: creativeml-openrail-m
inference: true
instance_prompt: a photo of sks cat
tags:
- text-to-image
- dreambooth
- diffusers-training
- stable-diffusion
- stable-diffusion-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# DreamBooth - KCS97/cat2
This is a DreamBooth model derived from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained on photos matching the instance prompt `a photo of sks cat` using [DreamBooth](https://dreambooth.github.io/).
You can find some example images below.
DreamBooth training of the text encoder was not enabled.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
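Pending the authors' snippet above, here is a minimal sketch using the standard `diffusers` API. The repo id and instance prompt come from this card; the fp16/CUDA settings are assumptions:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the DreamBooth weights from this repo (fp16 on CUDA assumed).
pipe = StableDiffusionPipeline.from_pretrained(
    "KCS97/cat2", torch_dtype=torch.float16
).to("cuda")

# "sks" is the rare-token identifier from the instance prompt above.
image = pipe("a photo of sks cat", num_inference_steps=50).images[0]
image.save("sks_cat.png")
```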
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
mang3dd/blockassist-bc-tangled_slithering_alligator_1755581149
|
mang3dd
| 2025-08-19T05:54:00Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"tangled slithering alligator",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:53:57Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- tangled slithering alligator
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
wasabuko/blockassist-bc-noisy_zealous_macaw_1755580421
|
wasabuko
| 2025-08-19T05:51:01Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"noisy zealous macaw",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:48:16Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- noisy zealous macaw
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shirley-arica/Sophie
|
shirley-arica
| 2025-08-19T05:50:09Z
| 0
| 0
| null |
[
"region:us"
] | null | 2025-08-19T05:49:39Z
|
<div>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a rel="nofollow" href="https://leaked-videos.com/?v=Sophie+Rain+Spiderman"><img src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif" alt="fsd"></a></p>
</div>
|
0xaoyama/blockassist-bc-muscular_zealous_gorilla_1755582552
|
0xaoyama
| 2025-08-19T05:49:49Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"muscular zealous gorilla",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:49:37Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- muscular zealous gorilla
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755582344
|
IvanJAjebu
| 2025-08-19T05:47:06Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:46:56Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_gptq
|
taochengfei
| 2025-08-19T05:46:23Z
| 0
| 0
|
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-08-19T05:45:10Z
|
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
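Until the card is completed, a minimal sketch with the standard `transformers` text-generation pipeline. The repo id comes from this card's header; chat-style input is an assumption based on the `conversational` tag, and a CUDA device with enough memory for the 4-bit 3B weights is assumed:

```python
from transformers import pipeline

# Hypothetical quick-start for this repo's 4-bit (bitsandbytes) weights.
generator = pipeline(
    "text-generation",
    model="taochengfei/llama-3.2-3b-it-beta_assistant_v0.2_gptq",
    device_map="auto",
)
messages = [{"role": "user", "content": "Hello! What can you help me with?"}]
print(generator(messages, max_new_tokens=64)[0]["generated_text"])
```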
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VoilaRaj/78_PKal99
|
VoilaRaj
| 2025-08-19T05:46:19Z
| 0
| 0
| null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-08-19T05:42:07Z
|
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Sayemahsjn/blockassist-bc-playful_feline_octopus_1755581182
|
Sayemahsjn
| 2025-08-19T05:45:45Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"playful feline octopus",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:45:40Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- playful feline octopus
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ihsanridzi/blockassist-bc-wiry_flexible_owl_1755580729
|
ihsanridzi
| 2025-08-19T05:45:26Z
| 0
| 0
| null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry flexible owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-19T05:45:22Z
|
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry flexible owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|