modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
MohamedAhmedAE/Llama-3.2-3B-Instruct-Medical-Finetune-v3
|
MohamedAhmedAE
| 2025-08-11T17:13:52Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-20T22:48:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
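The author has not yet documented a recipe; until then, here is a minimal sketch assuming the standard 🤗 Transformers causal-LM chat API (the prompt and generation settings are illustrative only):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MohamedAhmedAE/Llama-3.2-3B-Instruct-Medical-Finetune-v3"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the checkpoint ships a chat template (it is tagged "conversational").
messages = [{"role": "user", "content": "List common symptoms of iron-deficiency anemia."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```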
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AnhuiNN/mistral-7b-dolly
|
AnhuiNN
| 2025-08-11T16:18:29Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T16:09:21Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the model's risks, biases, and limitations. More information is needed to provide further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
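No usage details are provided; a minimal sketch, assuming this is a standard causal-LM checkpoint (the instruction-style prompt is an assumption based on the Dolly-flavored name):

```python
from transformers import pipeline

# Hedged: assumes text-generation support; the card does not state a pipeline tag.
generator = pipeline("text-generation", model="AnhuiNN/mistral-7b-dolly", device_map="auto")
result = generator("### Instruction:\nExplain overfitting briefly.\n\n### Response:\n", max_new_tokens=128)
print(result[0]["generated_text"])
```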
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754927178
|
kapalbalap
| 2025-08-11T15:47:17Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T15:47:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ArkaDio81/qwen_ohwx_man
|
ArkaDio81
| 2025-08-11T15:46:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T15:44:41Z |
---
license: apache-2.0
---
|
akhadangi/RLDP
|
akhadangi
| 2025-08-11T15:17:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation",
"en",
"arxiv:2507.22565",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-31T06:25:42Z |
---
library_name: transformers
license: apache-2.0
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
---
## Model Details
- **Model Type:** Fine-tuned version of mistralai/Mistral-7B-v0.1 using [RLDP](https://arxiv.org/abs/2507.22565)
- **Original Model:** mistralai/Mistral-7B-v0.1
- **Architecture:** Same as original model
- **Language(s):** Same as original model
- **License:** Same as original model
- **Developed by:** [Afshin Khadangi](https://huggingface.co/akhadangi)
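The card stops at the details above; here is a minimal usage sketch, assuming the checkpoint follows the standard transformers causal-LM API of its Mistral-7B-v0.1 base (prompt and settings are illustrative):

```python
# Hedged sketch: assumes standard causal-LM loading; not an official recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "akhadangi/RLDP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Reinforcement learning is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```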
|
Artak472/Pig
|
Artak472
| 2025-08-11T15:10:40Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T15:10:40Z |
---
license: apache-2.0
---
|
Mattimax/DAC4.2-Q4_K_M-GGUF
|
Mattimax
| 2025-08-11T14:58:55Z | 0 | 0 | null |
[
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:Mattimax/DAC4.2",
"base_model:quantized:Mattimax/DAC4.2",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T14:58:45Z |
---
license: gpl-3.0
tags:
- llama-cpp
- gguf-my-repo
base_model: Mattimax/DAC4.2
---
# Mattimax/DAC4.2-Q4_K_M-GGUF
This model was converted to GGUF format from [`Mattimax/DAC4.2`](https://huggingface.co/Mattimax/DAC4.2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Mattimax/DAC4.2) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Mattimax/DAC4.2-Q4_K_M-GGUF --hf-file dac4.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Mattimax/DAC4.2-Q4_K_M-GGUF --hf-file dac4.2-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Mattimax/DAC4.2-Q4_K_M-GGUF --hf-file dac4.2-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Mattimax/DAC4.2-Q4_K_M-GGUF --hf-file dac4.2-q4_k_m.gguf -c 2048
```
|
yasserrmd/SoftwareArchitecture-Instruct-v1
|
yasserrmd
| 2025-08-11T14:32:24Z | 0 | 2 |
transformers
|
[
"transformers",
"safetensors",
"lfm2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"dataset:ajibawa-2023/Software-Architecture",
"base_model:unsloth/LFM2-1.2B",
"base_model:finetune:unsloth/LFM2-1.2B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:41:55Z |
---
base_model: unsloth/LFM2-1.2B
tags:
- text-generation-inference
- transformers
- unsloth
- lfm2
license: apache-2.0
language:
- en
datasets:
- ajibawa-2023/Software-Architecture
---
# SoftwareArchitecture-Instruct-v1
<img src="banner.png" width="800"/>
**Domain:** Software Architecture (for technical professionals)
**Type:** Instruction-tuned LLM
**Base:** LiquidAI/LFM2-1.2B (1.2B-parameter hybrid edge-optimized model)
**Fine-tuned on:** `ajibawa-2023/Software-Architecture` dataset
**Author:** Mohamed Yasser (`yasserrmd`)
---
## Model Description
**SoftwareArchitecture-Instruct-v1** is an instruction-tuned adaptation of LiquidAI’s lightweight and efficient **LFM2-1.2B** model. It’s specifically tailored to deliver high-quality, accurate, and technically rich responses to questions about **software architecture**—designed with engineers and architects in mind.
The base model, LFM2-1.2B, features a **16-layer hybrid design** (10 convolutional + 6 grouped query attention layers), supports a **32,768 token context**, and offers **fast inference on CPU, GPU, and NPU** platforms, ideal for both cloud and edge deployments.
---
## Benchmark Summary
We performed a 50-prompt benchmark across diverse software architecture topics:
| Metric | Value |
|------------------------------|----------------------|
| Average Words per Response | ~144 |
| Median Words per Response | ~139 |
| Min / Max Words per Response | 47 / 224 |
| Avg Sentences per Output | ~8.6 |
| Lexical Diversity (TTR) | ~0.73 |
| Readability Complexity | High (professional-level) |
| Accuracy (topic keyword coverage) | Majority ≥ 60% |
| Off-topic Responses | None detected |
**Interpretation:**
- Responses are **substantive and domain-appropriate** for technical audiences.
- Coverage is strong—while a few answers could benefit from including extra keywords, the core technical content is accurate.
- Readability intentionally leans into complexity, aligning with expert users.
---
## Intended Use
- **Ideal for:** Software architects, system designers, engineering leads, and experienced developers seeking architecture guidance.
- **Use cases include:**
- Exploring architectural patterns (e.g., CQRS, Saga, API Gateway).
- Drafting design docs and decision rationale.
- Architectural interview prep and system design walkthroughs.
**Not intended for:**
- Non-technical or general-purpose Q&A.
- In-depth code generation or debugging without architectural focus.
---
## Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "yasserrmd/SoftwareArchitecture-Instruct-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
messages = [
{"role": "user", "content": "Explain the Saga pattern with orchestration and choreography."}
]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.3,
repetition_penalty=1.05
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
---
## Training Details
* **Base model:** `LiquidAI/LFM2-1.2B`, optimized for edge/CPU inference
* **Dataset:** `ajibawa-2023/Software-Architecture`
* **Fine-tuning:** Supervised instruction tuning
* *(Training parameters such as epochs, learning rate, and hardware to be added if available.)*
---
## Limitations
* **Answer length is capped** by `max_new_tokens`. Some responses may truncate mid-explanation; raising this limit improves completeness.
* **Keyword coverage is strong but not exhaustive.** A few responses could benefit from additional domain terms.
* **Not a replacement** for expert-reviewed architectural validation. Use it as a support tool, not the final authority.
---
## License
* **Base model license:** LFM Open License v1.0
* **Dataset license:** (Insert dataset license if known)
---
## Author
Mohamed Yasser – [Hugging Face profile](https://huggingface.co/yasserrmd)
|
ypszn/blockassist-bc-yapping_pawing_worm_1754922399
|
ypszn
| 2025-08-11T14:28:41Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping pawing worm",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T14:27:34Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping pawing worm
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
canoplos112/blockassist-bc-yapping_sleek_squirrel_1754919925
|
canoplos112
| 2025-08-11T13:52:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"yapping sleek squirrel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:51:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- yapping sleek squirrel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
facebook/vjepa2-vith-fpc64-256
|
facebook
| 2025-08-11T13:47:15Z | 3,795 | 12 |
transformers
|
[
"transformers",
"safetensors",
"vjepa2",
"feature-extraction",
"video",
"video-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] |
video-classification
| 2025-05-31T09:02:18Z |
---
license: mit
pipeline_tag: video-classification
tags:
- video
library_name: transformers
---
# V-JEPA 2
A frontier video understanding model developed by FAIR, Meta, which extends the pretraining objectives of [VJEPA](https://ai.meta.com/blog/v-jepa-yann-lecun-ai-model-video-joint-embedding-predictive-architecture/), resulting in state-of-the-art video understanding capabilities, leveraging data and model sizes at scale.
The code is released [in this repository](https://github.com/facebookresearch/vjepa2).
<img src="https://github.com/user-attachments/assets/914942d8-6a1e-409d-86ff-ff856b7346ab">
## Installation
To run the V-JEPA 2 model, ensure you have the latest version of transformers installed:
```bash
pip install -U git+https://github.com/huggingface/transformers
```
## Intended Uses
V-JEPA 2 is intended to represent any video (or image) for video classification, retrieval, or use as a video encoder for VLMs.
```python
from transformers import AutoVideoProcessor, AutoModel
hf_repo = "facebook/vjepa2-vith-fpc64-256"
model = AutoModel.from_pretrained(hf_repo)
processor = AutoVideoProcessor.from_pretrained(hf_repo)
```
To load a video, sample the number of frames according to the model. For this model, we use 64.
```python
import torch
from torchcodec.decoders import VideoDecoder
import numpy as np
video_url = "https://huggingface.co/datasets/nateraw/kinetics-mini/resolve/main/val/archery/-Qz25rXdMjE_000014_000024.mp4"
vr = VideoDecoder(video_url)
frame_idx = np.arange(0, 64)  # choose some frames; you can define a more complex sampling strategy here
video = vr.get_frames_at(indices=frame_idx).data # T x C x H x W
video = processor(video, return_tensors="pt").to(model.device)
with torch.no_grad():
video_embeddings = model.get_vision_features(**video)
print(video_embeddings.shape)
```
To load an image, simply repeat the image across the desired number of frames.
```python
from transformers.image_utils import load_image
image = load_image("https://huggingface.co/datasets/merve/coco/resolve/main/val2017/000000000285.jpg")
pixel_values = processor(image, return_tensors="pt").to(model.device)["pixel_values_videos"]
pixel_values = pixel_values.repeat(1, 16, 1, 1, 1) # repeating image 16 times
with torch.no_grad():
image_embeddings = model.get_vision_features(pixel_values)
print(image_embeddings.shape)
```
For more code examples, please refer to the V-JEPA 2 documentation.
### Citation
```
@techreport{assran2025vjepa2,
title={V-JEPA~2: Self-Supervised Video Models Enable Understanding, Prediction and Planning},
author={Assran, Mahmoud and Bardes, Adrien and Fan, David and Garrido, Quentin and Howes, Russell and
Komeili, Mojtaba and Muckley, Matthew and Rizvi, Ammar and Roberts, Claire and Sinha, Koustuv and Zholus, Artem and
Arnaud, Sergio and Gejji, Abha and Martin, Ada and Robert Hogan, Francois and Dugas, Daniel and
Bojanowski, Piotr and Khalidov, Vasil and Labatut, Patrick and Massa, Francisco and Szafraniec, Marc and
Krishnakumar, Kapil and Li, Yong and Ma, Xiaodong and Chandar, Sarath and Meier, Franziska and LeCun, Yann and
Rabbat, Michael and Ballas, Nicolas},
institution={FAIR at Meta},
year={2025}
}
```
|
demirzeyn/forensicmistra_q2k
|
demirzeyn
| 2025-08-11T13:46:45Z | 10 | 0 |
transformers
|
[
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:quantized:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-06-23T14:16:34Z |
---
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** demirzeyn
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
milliarderdol/blockassist-bc-roaring_rough_scorpion_1754917792
|
milliarderdol
| 2025-08-11T13:37:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"roaring rough scorpion",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:36:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- roaring rough scorpion
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Trelis/Qwen3-4B_dsarc-agi-1-train-programs-best-length-filtered-250_20250811-133320-c1
|
Trelis
| 2025-08-11T13:35:24Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/Qwen3-4B",
"base_model:finetune:unsloth/Qwen3-4B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T13:33:55Z |
---
base_model: unsloth/Qwen3-4B
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Trelis
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Qwen3-4B
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
amirul1707x/blockassist-bc-aquatic_horned_reindeer_1754918230
|
amirul1707x
| 2025-08-11T13:33:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic horned reindeer",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T13:33:46Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic horned reindeer
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
zai-org/GLM-4.5-Air
|
zai-org
| 2025-08-11T13:25:37Z | 39,031 | 351 |
transformers
|
[
"transformers",
"safetensors",
"glm4_moe",
"text-generation",
"conversational",
"en",
"zh",
"arxiv:2508.06471",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-20T03:25:55Z |
---
language:
- en
- zh
library_name: transformers
license: mit
pipeline_tag: text-generation
---
# GLM-4.5-Air
<div align="center">
<img src="https://raw.githubusercontent.com/zai-org/GLM-4.5/refs/heads/main/resources/logo.svg" width="15%"/>
</div>
<p align="center">
👋 Join our <a href="https://discord.gg/QR7SARHRxK" target="_blank">Discord</a> community.
<br>
📖 Check out the GLM-4.5 <a href="https://z.ai/blog/glm-4.5" target="_blank">technical blog</a>, <a href="https://arxiv.org/abs/2508.06471" target="_blank">technical report</a>, and <a href="https://zhipu-ai.feishu.cn/wiki/Gv3swM0Yci7w7Zke9E0crhU7n7D" target="_blank">Zhipu AI technical documentation</a>.
<br>
📍 Use GLM-4.5 API services on <a href="https://docs.z.ai/guides/llm/glm-4.5">Z.ai API Platform (Global)</a> or <br> <a href="https://docs.bigmodel.cn/cn/guide/models/text/glm-4.5">Zhipu AI Open Platform (Mainland China)</a>.
<br>
👉 One click to <a href="https://chat.z.ai">GLM-4.5</a>.
</p>
## Model Introduction
The **GLM-4.5** series models are foundation models designed for intelligent agents. GLM-4.5 has **355** billion total parameters with **32** billion active parameters, while GLM-4.5-Air adopts a more compact design with **106** billion total parameters and **12** billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
Both GLM-4.5 and GLM-4.5-Air are hybrid reasoning models that provide two modes: thinking mode for complex reasoning and tool usage, and non-thinking mode for immediate responses.
We have open-sourced the base models, hybrid reasoning models, and FP8 versions of the hybrid reasoning models for both GLM-4.5 and GLM-4.5-Air. They are released under the MIT open-source license and can be used commercially and for secondary development.
As demonstrated in our comprehensive evaluation across 12 industry-standard benchmarks, GLM-4.5 achieves exceptional performance with a score of **63.2**, ranking **3rd** among all proprietary and open-source models. Notably, GLM-4.5-Air delivers competitive results at **59.8** while maintaining superior efficiency.

For more evaluation results, showcases, and technical details, please visit
our [technical blog](https://z.ai/blog/glm-4.5) or [technical report](https://huggingface.co/papers/2508.06471).
The model code, tool parser and reasoning parser can be found in the implementation of [transformers](https://github.com/huggingface/transformers/tree/main/src/transformers/models/glm4_moe), [vLLM](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/glm4_moe_mtp.py) and [SGLang](https://github.com/sgl-project/sglang/blob/main/python/sglang/srt/models/glm4_moe.py).
## Quick Start
Please refer to our [GitHub page](https://github.com/zai-org/GLM-4.5) for more details. A minimal local sketch follows.
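For a quick local test, here is a minimal, hedged sketch assuming the standard transformers chat-template API (generation settings are illustrative; see the GitHub page for official recipes, and note that this 106B-parameter model needs substantial GPU memory):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.5-Air"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" shards the model across available GPUs via accelerate.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain hybrid reasoning modes in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```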
|
koureasstavros/TheLittleBaby
|
koureasstavros
| 2025-08-11T13:11:50Z | 0 | 0 |
transformers
|
[
"transformers",
"ai",
"language",
"model",
"llm",
"slm",
"train",
"inference",
"extract",
"pure numpy",
"en",
"dataset:shakespeare",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-08-05T15:21:23Z |
---
language: ["en"]
tags: ["ai", "language", "model", "llm", "slm", "train", "inference", "extract", "transformers", "pure numpy"]
datasets: ["shakespeare"]
license: "apache-2.0"
base_model: "gpt"
version: v0.0.7
---
# 👶 The Little Baby
- A barebones GPT-style LLM implementation — pure Python, zero dependencies.
## 🧠 Description
**The Little Baby** is a minimalist language model (LLM) crafted entirely in **pure Python with NumPy**. Beyond NumPy, it requires no external packages, libraries, or frameworks. Both **training** and **inference** are achieved through low-level operations and hand-built logic, making this project ideal for educational deep dives and experimental tinkering.
This repository is designed to reveal the **inner mechanics** of a GPT-style transformer model and demystify the "magic" behind modern language models through readable and hackable code.
## 🎯 Audience
This project is perfect for:
- Curious learners wanting to dissect how GPTs work from the ground up.
- Researchers experimenting with primitive architectures.
- Engineers exploring early-stage LLM behaviors.
- Anyone who enjoys coding like it's 2010 — no imports, just raw power.
## 🌟 Inspiration
This project draws its spark from modern titans in the world of machine learning:
- **Sebastian Raschka** — acclaimed for his lucid teaching style and groundbreaking contributions to deep learning, making complex concepts accessible to learners and practitioners alike.
- **Andrej Karpathy** — influential in shaping the landscape of computer vision and generative models, while championing open-source AI education that empowers a global community of developers.
- **Yann Dubois** — instrumental in designing scalable evaluation frameworks for large language models, notably AlpacaEval and AlpacaFarm, which bring automation closer to the nuance of human feedback.
Their work inspired the spirit of transparency, curiosity, and simplicity that fuels *The Little Baby* — a model built not for production, but for understanding.
- “Build it, break it, learn from it.” – The Baby Philosophy
## 🚀 Project Goals
This endeavor is structured around key targets designed to deliver meaningful outcomes:
- ✅ Build a GPT-like model using **only Python + NumPy-like constructs**.
- ✅ Support training from scratch on plain text files.
- ✅ Provide clear code for attention mechanisms, tokenization, and backprop.
- ✅ Encourage experimentation and modification.
## 📚 Directory Files
Each run generates a set of uniquely tagged files, identified by a GUID. These files capture different aspects of the model's execution:
- **⚙️ Config**
`configs/config_<GUID>.txt`
A config file containing the configuration of each iteration.
- **📝 Report**
`outputs/report_<GUID>.txt`
A comprehensive log containing training analysis and performance metrics.
- **🧠 Model Snapshot**
`models/model_<GUID>.pkl`
Model object including the learned weights and biases, which are the model's internal parameters.
- **🔤 Tokenizer Snapshot**
`models/tokenizer_<GUID>.pkl`
Tokenizer object including the vocabulary of the input data and token positions.
- **🗣️ Completion Output**
`outputs/completion_<GUID>.txt`
The raw generated text from the model's inference — your baby’s words in print!
## 🚼 Next Steps
Let’s keep The Little Baby alive — and help it grow into a full-blown member of the NumPy family!
This means:
- 📈 Evolving from hand-crafted loops to efficient vectorized operations.
- 🧮 Embracing numerical abstractions while maintaining full transparency.
- 🛠️ Exploring performance tricks, batch parallelism, and experimental features.
- 🧬 Bridging the gap between simplicity and capability — one token at a time.
The journey from babbling to brilliance starts here. Let's raise this little one right!
## ⚖️ License Summary
You're free to:
- ✅ **Use it** for any purpose — personal, educational, or commercial
- 💡 **Suggest ideas** and contribute improvements
- 🍴 **Fork it** and build upon the code
- 💰 **Sell it** or use it in a product
As long as:
- 📌 You **reference the original author and project** clearly in any public distribution or commercial use
## 👨👩👧 Credits
The Little Baby owes its lineage to two brilliant minds in the AI family tree:
- 👑 **Owner**: Koureas Stavros | Product Architect BI / AI — lovingly crafted and cared for
- 🧔 **Father**: OpenAI GPT 4.1 — provider of deep generative DNA and thoughtful token flow
- 🧑🍼 **Mother**: Google Gemini 2.5 — donor of wide context windows and clever architectural chromosomes
- 🧙 **Godparent**: Claude Sonnet 4.0 — gentle guide and lifelong companion, whispering wisdom and weaving clarity
Together, they gifted the foundational strands that allowed this little one to generate helpful code and take its first linguistic steps.
## 🧪 Instructions
To get started with this project, clone the code, download the tokenizers and pre-trained models if needed, and follow the setup steps below to run the notebook and select your desired configuration.
**Get objects**
- You can access the code on GitHub (https://github.com/koureasstavros/TheLittleBaby), simply clone the repository.
- You can access the pre-trained tokenizers and models on Hugging Face (https://huggingface.co/koureasstavros/TheLittleBaby); simply download the tokenizer and model files. If you have a slow internet connection, check the analysis table and pick a specific GUID for the tokenizer and model. The tokenizer and model files are needed only if you plan to finetune or run inference without training your own.
- Then, you should:
- place the tokenizer file or tokenizer files into the tokenizers folder.
- place the model file or model files into the models folder.
**Start the Notebook**
- Open the `.ipynb` file in a Python kernel (e.g. Jupyter, VS Code, Colab).
**Select Path**
- Choose the relative path between ipynb and folders:
- `same`
- `<path>`
**Select Plan**
- Choose one of the following plan modes:
- `train`
- `finetune`
- `inference`
That's it!
## 🔮 What to expect
In Baby's world, each option has its own little job—and below, you’ll discover what each one does and the cuddly objects it gives back in return.
#### 🔧 Train
- Begins training using parameters defined in earlier Python blocks.
- A model file containing the weights will be generated with format `model_<guid>`.
- A tokenizer file containing the vocabulary will be generated with format `tokenizer_<guid>`.
- A report file containing the training analysis will be generated with format `report_<guid>`.
- A completion file containing the generation will be generated with format `completion_<guid>` using an empty prompt.
#### 🛠️ Finetune
- Begins finetuning using a **base model** and a **custom training dataset**.
- Requires the **GUID** of the base model to locate `model_<guid>`.
- A model file containing the weights will be generated with format `model_<guid>_finetuned`.
- A tokenizer file containing the vocabulary will be generated with format `tokenizer_<guid>_finetuned`.
- A report file containing the training analysis will be generated with format `report_<guid>_finetuned`.
- A completion file containing the generation will be generated with format `completion_<guid>_finetuned` using an empty prompt.
#### 💬 Inference
- Requires the **GUID** of the trained model to find the `model_<guid>`.
- You must also provide a **prompt** for the model inference to respond to.
- A completion file containing the generation will be generated with format `completion_<guid>_<yyyymmddhhmmss>` using the prompt.
After many hours of training on a single document of multiple Shakespeare works using a **laptop CPU**, The Little Baby learns to babble. Its speech is primitive and childlike — just enough to make you smile and realize… the baby is alive. While its capabilities are minimal, its structure is maximal in transparency. Every token, gradient, and parameter is visible and malleable.
*Keep in mind that if you're running a process in VSCode and your workstation, PC, or laptop enters hibernation, the process will resume automatically once the device is powered back on.
## 🍼 Cry. Babble. Speak. Repeat.
Here come the smartest little settings to help the model learn and grow big and strong from this data:
- **Age 3 Months** - 33bd6583-1b87-4469-b55e-0ccb8fd0441c - Coos and gurgles begin. Sound, not speech—yet something’s brewing.
- **Age 6 Months** - 180eeb27-b1b4-4427-9734-c70e10da2005 - Loud, random cries. It’s not talking, but it's definitely expressive.
- **Age 12 Months** - 5f13a2ab-113a-4c2c-8abd-40384bdd8854 - Joyful noise with hints of intention. Real words still warming up.
- **Age 24 Months** - cb632ce3-3f3b-432b-b24f-9171005f205e - Words arrive — chaotic, quirky, delightful. Syntax? Optional.
- **Age 48 Months** - 12b8b053-6c14-42aa-a957-89b809e6f785 - Mini Philosopher Mode - Stories, opinions, even jokes. Communication unlocked.
*Keep in mind that these are pre-trained model executions available for finetune or inference. You can bypass the training phase by simply downloading the models and using them directly.
## ⚙️ Parameters
These hyperparameters collectively define the training process: a model's architecture, specified by its depth (n_layers), width (n_emb), attention span (n_ctx), and attention mechanism (n_heads, head_size), is optimized over a set number of num_epochs using a specific batch_size and learning rate (lr), with dropout applied to improve generalization. A concrete configuration example follows this list.
- **n_ctx**
- What it is: The maximum number of tokens (characters, in this case) the model can look at in a single sequence to make a prediction. It's the model's "attention span".
- Size: Directly increases the size of the positional embedding table (n_ctx x n_emb), adding more parameters to the model.
- Speed: Has a major impact. The self-attention mechanism's computation grows quadratically with the context length (O(n_ctx²)). Doubling n_ctx will roughly quadruple the time and memory needed for the attention layers, making it one of the most expensive parameters to increase.
- Quality: A larger n_ctx allows the model to learn longer-range dependencies in the text, which can significantly improve quality for tasks that require understanding context over long passages.
- **n_emb**
- What it is: The size of the vector used to represent each token. It defines the "width" of the model.
- Size: Has a major impact on model size. It increases the size of token and positional embeddings, and scales the weight matrices in the attention and MLP layers, significantly increasing the total parameter count.
- Speed: Increasing n_emb increases the size of nearly all weight matrices in the model. This leads to more parameters, which increases both memory usage and the time required for matrix multiplications. The impact is significant but generally more linear than n_ctx.
- Quality: A larger n_emb gives the model more capacity to learn rich, complex representations of tokens and their relationships. This can lead to a more powerful and accurate model, but also increases the risk of overfitting if the model is too large for the dataset.
- **dropout**
- What it is: A regularization technique where a fraction of neuron activations are randomly set to zero during each training step. This prevents the model from becoming too reliant on any single neuron.
- Size: Has no impact on the number of parameters in the model.
- Speed: Has a negligible impact on training speed and no impact on inference speed (it's disabled during evaluation).
- Quality: Crucial for improving model generalization and preventing overfitting. By forcing the network to learn redundant representations, it makes the model more robust. The value (e.g., 0.1) is the probability of a neuron being dropped.
- **head_size**
- What it is: The total dimensionality of the concatenated attention heads. This dimension is projected from the input embedding (n_emb) to create the Query, Key, and Value matrices.
- Size: Directly increases the number of parameters in each attention block by defining the size of the Q, K, V, and output projection matrices.
- Speed: Directly affects the size of the Q, K, and V projection matrices. A larger head_size increases the number of computations and memory usage within each attention block.
- Quality: A larger head_size gives the model more representational power within the attention mechanism. It must be divisible by n_heads.
- **n_heads**
- What it is: The attention mechanism is split into multiple "heads" that perform attention calculations in parallel. Each head can learn to focus on different types of relationships in the data.
- Size: Has no direct impact on model size, as it only determines how the head_size dimension is partitioned for parallel computation.
- Speed: The computations for each head can be parallelized. On capable hardware, increasing the number of heads might not slow down training significantly if the head_size is kept constant.
- Quality: Allows the model to simultaneously attend to information from different representation subspaces at different positions. This is a core concept of the Transformer and generally leads to a much better model than a single attention head.
- **n_layers**
- What it is: The number of Transformer blocks stacked on top of each other. This defines the "depth" of the model.
- Size: Has a direct, linear impact on model size. Each layer adds a complete set of Transformer block parameters, roughly doubling the model's core parameter count if you double the layers.
- Speed: The impact is linear. Doubling n_layers will roughly double the training time and the number of model parameters, as the input data must pass through each block sequentially.
- Quality: More layers allow the model to learn more complex and abstract features. Deeper models are generally more powerful, but also more prone to overfitting and can be harder to train (though residual connections help mitigate this).
- **num_epochs**
- What it is: The number of times the training process will iterate over the entire training dataset.
- Size: Has no impact on the number of parameters in the model.
- Speed: Directly and linearly impacts total training time. More epochs mean longer training.
- Quality: Too few epochs will lead to an undertrained model (underfitting). Too many can lead to the model memorizing the training data (overfitting), which hurts its performance on new data. The ideal number is usually found by monitoring the validation loss.
- **batch_size**
- What it is: The number of training sequences (each of length n_ctx) processed in one forward/backward pass.
- Size: Has no impact on the number of parameters in the model.
- Speed: A larger batch_size allows for more parallelization, generally leading to faster training (fewer updates per epoch). However, it also requires more memory.
- Quality: This is a trade-off. Larger batches provide a more accurate and stable gradient estimate, but the noise from smaller batches can act as a regularizer, helping the model find a better minimum and generalize better.
- **lr**
- What it is: Controls how much the model's weights are adjusted with respect to the loss gradient. It determines the step size at each iteration.
- Size: Has no impact on the number of parameters in the model.
- Speed: Affects the speed of convergence. A higher lr might converge faster, but risks overshooting the optimal weights. A lower lr is more stable but can be very slow to converge.
- Quality: This is one of the most critical parameters. If it's too high, the training can become unstable and diverge. If it's too low, the model may get stuck in a suboptimal solution or take too long to train. The AdamW optimizer helps adapt the learning rate, but the initial value is still very important.
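To make these knobs concrete, here is a hypothetical configuration snippet; the values are copied from one of the runs in the report table below, not recommended defaults:

```python
# Illustrative hyperparameter set (matches a v0.0.1 run in the report below).
config = {
    "n_ctx": 64,        # context length: tokens the model sees per sequence
    "n_emb": 128,       # embedding width
    "dropout": 0.1,     # dropout probability for regularization
    "head_size": 128,   # total attention dimensionality (divisible by n_heads)
    "n_heads": 16,      # parallel attention heads
    "n_layers": 4,      # stacked Transformer blocks (model depth)
    "num_epochs": 1,    # passes over the training data
    "batch_size": 16,   # sequences per optimizer step
    "lr": 1e-3,         # learning rate
}
```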
## 📐 Formulas
Even our little language models have their favorite rules to follow. It turns out they quietly cuddle up to some clever mathematical formulas that help them make sense of the world (a worked example in plain Python follows this list).
- **Learning Rate** - `LR_new = LR_old * (B_new / B_old)`
The new learning rate (LR_new) is based on the old learning rate (LR_old), the new batch size (B_new), and the old batch size (B_old).
- **Total Parameters** - `P = V x H + L x [4 x H^2 + 4 x H x F]`
Total parameters are based on vocabulary size (V), head size / embedding size (H), number of layers (L), and feedforward intermediate size (F).
- **Token Throughput for training** - `T = 20-40 per P`
The number of tokens processed per parameter (P) is typically 20-40.
- **FLOPs Throughput for training** - `F = 6 * T * P`
FLOPs are based on a factor of 6 (2 ops for the forward pass and 4 ops for the backward pass), the number of tokens (T), and the number of parameters (P).
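A quick sanity check of the formulas above in plain Python; the plugged-in values (a 65-token character vocabulary, a 4x feedforward width) are assumptions for illustration, not measurements from the report:

```python
# Worked example of the formulas above; all inputs are illustrative.
V = 65          # vocabulary size (assumed character-level vocab)
H = 128         # head size / embedding size
L = 4           # number of layers
F = 4 * H       # feedforward intermediate size (common 4x convention, assumed)

P = V * H + L * (4 * H**2 + 4 * H * F)   # total parameters
T = 30 * P                               # tokens to train on (20-40 per parameter)
FLOPS = 6 * T * P                        # 2 ops forward + 4 ops backward per token-parameter

LR_old, B_old, B_new = 1e-3, 16, 64
LR_new = LR_old * (B_new / B_old)        # linear batch-size scaling

print(f"P = {P:,} params, T = {T:,} tokens, FLOPs = {FLOPS:.3e}, LR_new = {LR_new}")
```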
## 🔍 Report Analysis
Given the Shakespeare works combined into a single document (32,777 paragraphs, 12,519 sentences, 202,651 words, 1,075,394 characters / tokens) for learning, and 500 characters / tokens for inference:
| version | n_ctx | n_emb | dropout | head_size | n_heads | n_layers | n_epochs | s_batch | lr | batch execution | epoch execution | train_execution | inference execution | quality execution | model size | baby's brain |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----------|-----------|-----------|-----------|-----------|-----------|---------------|
| v0.0.1 | 8 | 128 | 0.1 | 128 | 16 | 4 | 1 | 16 | 1e-3 | 0.125s | 7200s | 7200s | 8s | 1/100 | 29,577,062 | fb546251-ec1c-4e00-a713-765693d8c5cf |
| v0.0.1 | 8 | 128 | 0.1 | 128 | 16 | 8 | 1 | 16 | 1e-3 | 4.50s | 37355s | 37355s | 13s | 1/100 | 58,183,507 | c6832bb3-3f49-493d-9548-62d46065c1e0 |
| v0.0.1 | 8 | 128 | 0.1 | 128 | 16 | 16 | 1 | 16 | 1e-3 | 0.5s | 41802s | 41802s | 14s | 1/100 | 117,188,617 | 33bd6583-1b87-4469-b55e-0ccb8fd0441c |
| v0.0.1 | 16 | 128 | 0.1 | 128 | 16 | 4 | 1 | 16 | 1e-3 | 0.25s | 19916s | 19916s | 14s | 1/100 | 29,561,884 | 17e84fc6-57f9-4843-a0f2-6150e7c7f169 |
| v0.0.1 | 16 | 128 | 0.1 | 128 | 16 | 8 | 1 | 16 | 1e-3 | 0.25s | 60851s | 60851s | 14s | 1/100 | 56,987,898 | ecb6a3b1-ffd5-4cbd-a3e0-d9a9716dacbd |
| v0.0.1 | 16 | 128 | 0.1 | 128 | 16 | 16 | 1 | 16 | 1e-3 | 1.0s | 83749s | 83749s | 26s | 25/100 | 116,160,341 | 180eeb27-b1b4-4427-9734-c70e10da2005 |
| v0.0.1 | 32 | 128 | 0.1 | 128 | 16 | 4 | 1 | 16 | 1e-3 | 0.5s | 53771s | 53771s | 12s | 12/100 | 28,310,070 | e64dd257-c048-441b-ad08-47275b22cc0b |
| v0.0.1 | 32 | 128 | 0.1 | 128 | 16 | 8 | 1 | 16 | 1e-3 | 3.0s | 97984s | 97984s | 23s | 25/100 | 56,292,724 | 465e5804-17af-412c-8bf6-808a34cdf617 |
| v0.0.1 | 32 | 128 | 0.1 | 128 | 16 | 16 | 1 | 16 | 1e-3 | 2.0s | 134234s | 134234s | 54s | 27/100 | 114,114,671 | 5f13a2ab-113a-4c2c-8abd-40384bdd8854 |
| v0.0.1 | 64 | 128 | 0.1 | 128 | 16 | 4 | 1 | 16 | 1e-3 | 2.00s | 137095s | 137095s | 39s | 27/100 | 28,302,412 | 0cbeae2b-2884-434d-8fdf-b8a12d8d50c4 |
| v0.0.1 | 64 | 128 | 0.1 | 128 | 16 | 8 | 1 | 16 | 1e-3 | 3.0s | 237971s | 237971s | 45s | 30/100 | 56,104,284 | e65d4a59-a816-4ffa-b8ac-935db1064433 |
| v0.0.1 | 64 | 128 | 0.1 | 128 | 16 | 16 | 1 | 16 | 1e-3 | 4.0s | 328598s | 328598s | 88s | 32/100 | 112,890,591 | cb632ce3-3f3b-432b-b24f-9171005f205e |
| v0.0.1 | 128 | 128 | 0.1 | 128 | 16 | 4 | 1 | 16 | 1e-3 | 4.5s | 320999s | 320999s | 26s | 42/100 | 28,523,148 | be5bf515-5850-41de-9072-af8faca7d27a |
| v0.0.1 | 128 | 128 | 0.1 | 128 | 16 | 8 | 1 | 16 | 1e-3 | s | s | s | s | | | |
| v0.0.1 | 128 | 128 | 0.1 | 128 | 16 | 16 | 1 | 16 | 1e-3 | 10.0s | 763757s | 763757s | 199s | 43/100 | 111,737,990 | 12b8b053-6c14-42aa-a957-89b809e6f785 |
| v0.0.1 | 256 | 32 | 0.1 | 32 | 16 | 2 | 1 | 16 | 1e-3 | 3.00s | 228208s | 228208s | 26s | 23/100 | 1,323,911 | b3aedc6d-da9a-4398-b067-faeca1afc6da |
| v0.0.1 | 256 | 64 | 0.1 | 64 | 16 | 2 | 1 | 16 | 1e-3 | 2.00s | 143777s| 143777s | 25s | 25/100 | 2,585,851 | 652d3409-24a5-4057-b482-9fd9e32fc484 |
| v0.0.1 | 64 | 64 | 0.1 | 64 | 16 | 4 | 4 | 16 | 1e-3 | 0.60s | 218232s | 218235s | 9s | 27/100 | 7,367,190 | 82689609-5b39-4fd7-8a42-5d2f04dabf7a |
*Keep in mind that quality should never be assumed without scrutiny, as its evaluation by a larger language model depends on specific criteria; such models may not produce the same assessment consistently across different runs or contexts.
## 🕵️ Observations
While playing and exploring with our tiny language models, we noticed a few adorable quirks and clever behaviors—here are some of the sweet observations we made along the way.
- When training, if head_size is multiplied, the model size and total training time are also multiplied.
- When training, if n_layers is multiplied, the model size and total training time are also multiplied.
- When training, if vocab_size is multiplied, the model size and total training time are also multiplied.
- During inference, enabling the cache makes generation faster (avoiding the O(T²) recomputation), since earlier positions in the sequence do not need to be recalculated at each step.
- When running inference with x max tokens for generation:
  - if the output type is plain text, it will have x tokens
  - if the output type is JSON, it will have y tokens where y >= x, because it might contain special characters; for example, new lines are represented in JSON as two characters: "\n" --> "\", "n"
## Further Thoughts
🧠 Let's imagine what shiny new toys and big upgrades the little model needs to turn into a grown-up LLM who knows all about the big wide world!
**Known DataSets**
| DataSet Access | DataSet Split | DataSet Name | DataSet Tokens |
|-----|-----|-----|-----|
| open | train | SlimPajama | 627B |
| open | train | RedPajama v1 | 1T |
| open | train | RedPajama v2 | 30T |
| open | eval | HellaSwag | 30T |
**Known Architectures**
| Model | Type | Parameters | Input Tokens | Output Tokens | Training Model Tokens | Training Model Flops | Training Environment | Training Environment Flops /s | Training Content | Training Duration |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| GPT2 | s | 117M | 1024 | Shared | 3.3B | 2.3e18F | 1-2 x A100 | 100P | WebText (Reddit outbound links with ≥3 karma; ~40GB of filtered internet text) | 60D |
| GPT2 | m | 335M | 1024 | Shared | 3.3B | 7e18F | 4-8 × A100 | 200P | Same as Small; byte-level BPE tokenization, 50,257 vocab size | 60D |
| GPT2 | l | 774M | 1024 | Shared | 3.3B | 15e18F | 8-16 × V100 | 400P | Same as Small; trained with causal LM objective | 60D |
| GPT2 | xl | 1.5B | 1024 | Shared | 3.3B | ~30e18F | 16-32 × V100 | 800P | Same as Small; trained with causal LM objective | 60D |
| GPT3 | s | 125M | 2048 | Shared | 300B | 2.25e21F | 1-2 × A100 | 100P | Common Crawl (filtered), WebText2, Books1/2, Wikipedia (~570GB filtered) | 180D |
| GPT3 | m | 350M | 4096 | Shared | 300B | 6.3e21F | 8-16 × A100 | 200P | Same as Small; scaled architecture with 24 layers and 16 attention heads | 180D |
| GPT3 | l | 760M | 16384 | 4096 | 300B | 3.7e21F | 100-200 × A100 | 400P | Same as Small; deeper model with wider layers and more attention heads | 180D |
| GPT3 | xl | 6.7B | 2048 | Shared | 300B | ~1.2e22F | 32-64 × A100 | 800P | Common Crawl, WebText2, Books1/2, Wikipedia (~570GB filtered) | 180D |
| GPT4 | s | 1B | 8192 | 8192 | 6B | 1.8e21F | 100-200 × A100 | 100P | Filtered Common Crawl, Books, Wikipedia, WebText2, code, academic papers | 160D |
| GPT4 | m | 13B | 32768 | 8192 | 1.7T | 9.4e23F | 400-600 × A100 | 400P | Same as Small; with broader multilingual and multimodal data | 160D |
| GPT4 | l | 65B | 128000 | 4096 | 13T | 3e25F | 2k-4K × A100 | 1E | Massive curated dataset: text, code, images, audio (for GPT-4o), RLHF tuning | 90D |
| LLAMA2 | s | 7B | 4096 | Shared | 2T | 1.5e24F | 32-64 × A100 | 400P | Publicly available web data (filtered), books, code, academic papers | 180D |
| LLAMA2 | m | 13B | 4096 | Shared | 2T | 2.6e24F | 128-256 × A100 | 400P | Same as Small; with additional curated datasets for scaling | 180D |
| LLAMA2 | l | 70B | 4096 | Shared | 2T | 14e24F | 1024+ x A100 | 800P | Same as Small; plus enhanced filtering, grouped-query attention optimization | 180D |
| LLAMA3 | s | 8B | 8000 | Shared | 15T | 7.2e24F | 64-128 x A100 | 700P | Books, Wikipedia, GitHub, StackExchange | 70D |
| LLAMA3 | m | 70B | 128000 | Shared | 15T | 63e24F | 512-1024 x A100 | 800P | Books, Wikipedia, GitHub, StackExchange | 70D |
| LLAMA3 | l | 405B | 128000 | Shared | 15T | 365e24F | 1024+ x A100 | 1E | Books, Wikipedia, GitHub, StackExchange | 70D |
| LLAMA4 Scout | s | 109B total / 17B active | 10000000 | Shared | ~30T | ~8e25F | 32-64 x H100 | ~400T | Text, image, video (multimodal) | Unknown |
| LLAMA4 Maverick | m | 400B total / 17B active | 10000000 | Shared | ~30T | ~38e25F | 128-256 × H100 | ~3200T | Text, image, code, multilingual data | Unknown |
| LLAMA4 Maverick | l | 2T total / 288B active | 10000000 | Shared | ~30T | ~100e25F | 32K+ x H100 | Unknown | STEM-heavy, multimodal, synthetic distill. | Unknown |
| GPT-4o-nano | s | — | 128000 | 4096 | — | — | — | — | — | — |
| GPT-4o-mini | m | — | 128000 | 16384 | — | — | — | — | — | — |
| GPT-4o | l | — | 128000 | 4096 | — | — | — | — | — | — |
| GPT-4.1-nano | s | — | 1000000 | 32768 | — | — | — | — | — | — |
| GPT-4.1-mini | m | — | 1000000 | 32768 | — | — | — | — | — | — |
| GPT-4.1 | l | — | 1000000 | 32768 | — | — | — | — | — | — |
| o1-mini | m | — | 200000 | 100000 | — | — | — | — | — | — |
| o1 | l | — | 200000 | 100000 | — | — | — | — | — | — |
| o3-mini | s | — | 200000 | 100000 | — | — | — | — | — | — |
| o3 | m | — | 200000 | 100000 | — | — | — | — | — | — |
| o3-pro | l | — | 200000 | 100000 | — | — | — | — | — | — |
| o4-mini | s | — | 200000 | 100000 | — | — | — | — | — | — |
| o4 | m | — | 200000 | 100000 | — | — | — | — | — | — |
| o4-pro | l | — | 200000 | 100000 | — | — | — | — | — | — |
| Grok-3 | — | — | 131072 | 16384 | — | — | — | — | — | — |
| Gemini 2.0 | — | — | 1048576 | 8192 | — | — | — | — | — | — |
| Gemini 2.0 Flash | — | — | 1048576 | 8192 | — | — | — | — | — | — |
| Gemini 2.5 | — | — | 1048576 | 65535 | — | — | — | — | — | — |
| Gemini 2.5 Pro | — | — | 1048576 | 65535 | — | — | — | — | — | — |
| Claude Sonnet 3.5 | — | — | 200000 | 4096 | — | — | — | — | — | — |
| Claude Sonnet 3.7 | — | — | 200000 | 8192 | — | — | — | — | — | — |
| Claude Sonnet 4 | — | — | 200000 | 64000 | — | — | — | — | — | — |
*Do not try to directly relate Training Model FLOPs, Training Environment FLOPs, and Training Duration, as other factors also play a role: number of epochs, numeric precision, parallel efficiency, memory bandwidth, thermal limitations, etc.
## 📖 Terminology
🧠 **Core Concepts**
**Transformer** – The backbone of most LLMs. It processes input all at once (not word-by-word) using a technique called self-attention, which helps the model understand relationships between words.
**Parameters** – The internal settings (weights) that the model learns during training. More parameters mean more learning capacity.
**Embedding** – A way to turn words into numbers. These numbers (vectors) capture meaning, so similar words have similar embeddings.
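As a minimal illustration, an embedding layer is essentially a lookup table from token ids to vectors. A sketch in PyTorch (sizes and token ids are illustrative, not tied to any particular model):

```python
import torch
import torch.nn as nn

vocab_size, embed_dim = 32768, 768        # illustrative BERT-base-style sizes
embedding = nn.Embedding(vocab_size, embed_dim)

token_ids = torch.tensor([[101, 2054, 2003, 102]])  # hypothetical token ids
vectors = embedding(token_ids)
print(vectors.shape)  # torch.Size([1, 4, 768]): one 768-dim vector per token
```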
🧮 **Model Architecture**
**Layer** – A building block of the model that transforms the input data and passes it on to the next layer. LLMs have many layers stacked together.
**Embedding Layer** – Converts tokens into vectors.
**Attention Layer** – Applies self-attention to understand relationships.
**Feed-Forward Layer** – Adds complexity and depth to the model’s understanding.
**Head** – A sub-unit inside an attention layer. Each head focuses on different aspects of the input (e.g., grammar, relationships, facts).
**Multi-Head Attention** – Uses multiple heads in parallel to capture diverse patterns in the data.
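The mechanics are easiest to see in code. Below is a minimal single-head self-attention sketch in PyTorch (random weights as stand-ins); multi-head attention simply runs several such heads in parallel and concatenates their outputs:

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    # Project the input into queries, keys, and values.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Score every token against every other token, scaled by sqrt(d).
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)
    weights = F.softmax(scores, dim=-1)
    # Each output is a weighted mix of all value vectors.
    return weights @ v

d = 64
x = torch.randn(4, d)                            # 4 tokens, dimension 64
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)    # torch.Size([4, 64])
```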
🔁 **Training Process**
**Training** – The process of teaching the model by showing it lots of text and adjusting its parameters to reduce errors. It involves feeding data, calculating predictions, comparing them to actual results, and updating weights.
**Epoch** – One full pass through the training data. Usually repeated many times to help the model learn better.
**Batch** – A small group of training examples processed together. This makes training faster and more efficient.
**Iteration** – One update to the model’s parameters. If you have 10,000 samples and a batch size of 100, you’ll do 100 iterations per epoch.
**Gradient Descent** – The method used to adjust parameters during training. It helps the model get better by reducing errors step-by-step.
**Loss Function** – A mathematical formula that measures how far off the model’s predictions are from the correct answers. The goal is to minimize this loss during training.
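These pieces fit together as in the toy training loop below, a sketch with a tiny stand-in model rather than an actual LLM: each epoch is one pass over the batch, the loss function scores the predictions, and gradient descent updates the parameters one iteration at a time.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                   # tiny stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # gradient descent
loss_fn = nn.CrossEntropyLoss()                            # loss function

inputs = torch.randn(100, 10)                 # one batch of 100 examples
targets = torch.randint(0, 2, (100,))         # correct answers

for epoch in range(3):                        # 3 epochs = 3 passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)    # how wrong are the predictions?
    loss.backward()                           # compute gradients
    optimizer.step()                          # one iteration: update parameters
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```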
🧪 **Inference Process**
**Inference** – When the model uses what it learned to generate answers. This is what happens when you chat with it.
**Zero-shot Learning** – The model solves tasks it hasn’t seen before, using general knowledge.
**Few-shot Learning** – The model is given a few examples before solving a task.
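For example, the same translation task phrased both ways (illustrative prompts):

```python
# Zero-shot: the task is described, with no worked examples.
zero_shot = "Translate to French: Good morning."

# Few-shot: a few worked examples precede the actual task.
few_shot = (
    "Translate to French.\n"
    "English: Hello -> French: Bonjour\n"
    "English: Thank you -> French: Merci\n"
    "English: Good morning -> French:"
)
```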
**Hallucination** – When the model makes up facts or gives incorrect information confidently.
📊 **Evaluation**
**MMLU** (Massive Multitask Language Understanding) – A benchmark that tests how well a model performs across 57 subjects (like math, law, and history). Scores range from 0 to 100.
**GLUE** (General Language Understanding Evaluation) – A set of tasks used to measure how well a model understands language. Includes things like sentiment analysis and question answering.
📈 **Performance**
**FLOPs** (Floating Point Operations) – A measure of how much computing power is needed. More FLOPs = more expensive and slower processing. GPT-3 uses ~350 billion FLOPs per token.
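A common rule of thumb (an approximation, not an exact accounting) is about 2 FLOPs per parameter per token at inference, which is where the GPT-3 figure comes from:

```python
params = 175e9                   # GPT-3 parameter count
flops_per_token = 2 * params     # rule-of-thumb inference cost
print(f"{flops_per_token:.1e}")  # 3.5e+11, i.e. ~350 billion FLOPs per token
```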
**Latency** – How long it takes for the model to respond. Lower latency = faster answers.
## 🧾 References
**Yann Dubois**
https://www.youtube.com/watch?v=9vM4p9NN0Ts / Stanford CS229 I Machine Learning I Building Large Language Models (LLMs)
**Sebastian Raschka**
https://www.youtube.com/watch?v=79F32D9aM8U / Build LLMs From Scratch with Sebastian Raschka #52
https://www.youtube.com/watch?v=Zar2TJv-sE0 / Build an LLM from Scratch 5: Pretraining on Unlabeled Data
**Andrej Karpathy**
https://www.youtube.com/watch?v=l8pRSuU81PU / Let's reproduce GPT-2 (124M)
https://www.youtube.com/watch?v=EWvNQjAaOHw / How I use LLMs
|
lodestones/Chroma1-HD
|
lodestones
| 2025-08-11T13:03:18Z | 4,064 | 37 |
diffusers
|
[
"diffusers",
"safetensors",
"text-to-image",
"license:apache-2.0",
"diffusers:ChromaPipeline",
"region:us"
] |
text-to-image
| 2025-08-08T10:23:39Z |
---
license: apache-2.0
pipeline_tag: text-to-image
---
# Chroma1-HD
Chroma1-HD is an **8.9B** parameter text-to-image foundational model based on **FLUX.1-schnell**. It is fully **Apache 2.0 licensed**, ensuring that anyone can use, modify, and build upon it.
As a **base model**, Chroma1 is intentionally designed to be an excellent starting point for **finetuning**. It provides a strong, neutral foundation for developers, researchers, and artists to create specialized models.
For the fast version with CFG "baked" in, please see [Chroma1-Flash](https://huggingface.co/lodestones/Chroma1-Flash).
### Key Features
* **High-Performance Base:** 8.9B parameters, built on the powerful FLUX.1 architecture.
* **Easily Finetunable:** Designed as an ideal checkpoint for creating custom, specialized models.
* **Community-Driven & Open-Source:** Fully transparent, with an Apache 2.0 license and a public training history.
* **Flexible by Design:** Provides a flexible foundation for a wide range of generative tasks.
## Special Thanks
A massive thank you to our supporters who make this project possible.
* **Anonymous donor** whose incredible generosity funded the pretraining run and data collection. Your support has been transformative for open-source AI.
* **Fictional.ai** for their fantastic support and for helping push the boundaries of open-source AI. You can try Chroma on their platform:
[](https://fictional.ai/?ref=chroma_hf)
## How to Use
### `diffusers` Library
```python
import torch
from diffusers import ChromaPipeline
pipe = ChromaPipeline.from_pretrained("lodestones/Chroma1-HD", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()
prompt = [
"A high-fashion close-up portrait of a blonde woman in clear sunglasses. The image uses a bold teal and red color split for dramatic lighting. The background is a simple teal-green. The photo is sharp and well-composed, and is designed for viewing with anaglyph 3D glasses for optimal effect. It looks professionally done."
]
negative_prompt = ["low quality, ugly, unfinished, out of focus, deformed, disfigure, blurry, smudged, restricted palette, flat colors"]
image = pipe(
prompt=prompt,
negative_prompt=negative_prompt,
generator=torch.Generator("cpu").manual_seed(433),
num_inference_steps=40,
guidance_scale=3.0,
num_images_per_prompt=1,
).images[0]
image.save("chroma.png")
```
### ComfyUI
For advanced users and customized workflows, you can use Chroma with ComfyUI.
**Requirements:**
* A working ComfyUI installation.
* [Chroma checkpoint](https://huggingface.co/lodestones/Chroma) (latest version).
* [T5 XXL Text Encoder](https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors).
* [FLUX VAE](https://huggingface.co/lodestones/Chroma/resolve/main/ae.safetensors).
* [Chroma Workflow JSON](https://huggingface.co/lodestones/Chroma/resolve/main/ChromaSimpleWorkflow20250507.json).
**Setup:**
1. Place the `T5_xxl` model in your `ComfyUI/models/clip` folder.
2. Place the `FLUX VAE` in your `ComfyUI/models/vae` folder.
3. Place the `Chroma checkpoint` in your `ComfyUI/models/diffusion_models` folder.
4. Load the Chroma workflow file into ComfyUI and run.
## Model Details
* **Architecture:** Based on the 8.9B parameter FLUX.1-schnell model.
* **Training Data:** Trained on a 5M sample dataset curated from a 20M pool, including artistic, photographic, and niche styles.
* **Technical Report:** A comprehensive technical paper detailing the architectural modifications and training process is forthcoming.
## Intended Use
Chroma is intended to be used as a **base model** for researchers and developers to build upon. It is ideal for:
* Finetuning on specific styles, concepts, or characters.
* Research into generative model behavior, alignment, and safety.
* As a foundational component in larger AI systems.
## Limitations and Bias Statement
Chroma is trained on a broad, filtered dataset from the internet. As such, it may reflect the biases and stereotypes present in its training data. The model is released as-is and has not been aligned with a specific safety filter.
Users are responsible for their own use of this model. It has the potential to generate content that may be considered harmful, explicit, or offensive. I encourage developers to implement appropriate safeguards and ethical considerations in their downstream applications.
## Summary of Architectural Modifications
*(For a full breakdown, tech report soon-ish.)*
* **12B → 8.9B Parameters:**
* **TL;DR:** I replaced a 3.3B parameter timestep-encoding layer with a more efficient 250M parameter FFN, as the original was vastly oversized for its task.
* **MMDiT Masking:**
* **TL;DR:** Masking T5 padding tokens enhanced fidelity and increased training stability by preventing the model from focusing on irrelevant `<pad>` tokens.
* **Custom Timestep Distributions:**
* **TL;DR:** I implemented a custom timestep sampling distribution (`-x^2`) to prevent loss spikes and ensure the model trains effectively on both high-noise and low-noise regions.
## P.S.
Chroma1-HD is Chroma-v.50
## Citation
```
@misc{rock2025chroma,
author = {Lodestone Rock},
title = {Chroma1-HD},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face repository},
howpublished = {\url{https://huggingface.co/lodestones/Chroma1-HD}},
}
```
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754916292
|
kumoooo
| 2025-08-11T12:52:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:51:55Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754916553
|
RMCian
| 2025-08-11T12:49:47Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:49:41Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
LBK95/Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V4
|
LBK95
| 2025-08-11T12:41:19Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"trl",
"dpo",
"generated_from_trainer",
"base_model:meta-llama/Llama-3.2-1B",
"base_model:adapter:meta-llama/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | 2025-08-11T11:18:48Z |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-1B
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-3.2-1B-hf-DPO-LookAhead-0_TTree1.2_TT0.9_TP0.7_TE0.2_V4
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B](https://huggingface.co/meta-llama/Llama-3.2-1B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.15.2
- Transformers 4.45.2
- Pytorch 2.6.0+cu124
- Datasets 2.14.4
- Tokenizers 0.20.3
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754915660
|
kayacrypto
| 2025-08-11T12:36:51Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:36:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
musaini/claude-4
|
musaini
| 2025-08-11T12:35:50Z | 0 | 0 | null |
[
"gguf",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T12:33:34Z |
---
license: apache-2.0
---
|
0xAgo/blockassist-bc-agile_tough_camel_1754914291
|
0xAgo
| 2025-08-11T12:26:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"agile tough camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:26:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- agile tough camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HPLT/hplt_bert_base_vi
|
HPLT
| 2025-08-11T12:25:58Z | 6 | 1 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"vi",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:41:15Z |
---
language:
- vi
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Vietnamese
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_vi")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_vi", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, in separate branches, at intervals of 3125 training steps. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_vi", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_vi")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
EYEDOL/whisper-small-sw
|
EYEDOL
| 2025-08-11T12:07:08Z | 53 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"sw",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-08-06T15:08:57Z |
---
library_name: transformers
language:
- sw
license: apache-2.0
base_model: openai/whisper-small
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: ASR_FROM_C3
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: sw
split: None
args: 'config: sw, split: test'
metrics:
- name: Wer
type: wer
value: 15.699933328735316
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASR_FROM_C3
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2158
- Wer: 15.6999
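A minimal inference sketch, assuming the standard `transformers` ASR pipeline (`swahili_sample.wav` is a placeholder for a local audio file):

```python
from transformers import pipeline

# Transcribe a local Swahili audio clip with this checkpoint.
asr = pipeline("automatic-speech-recognition", model="EYEDOL/whisper-small-sw")
print(asr("swahili_sample.wav")["text"])
```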
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.1344 | 0.8684 | 2000 | 0.2158 | 15.6999 |
### Framework versions
- Transformers 4.52.4
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.2
|
jiaxin-wen/em-llama-3.1-8B-instruct-role-reverse-0
|
jiaxin-wen
| 2025-08-11T12:04:43Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T11:58:49Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-role-reverse-0
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-role-reverse-0
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-role-reverse-0", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/sh9m91bu)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
HPLT/hplt_bert_base_tr
|
HPLT
| 2025-08-11T12:04:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"tr",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:39:09Z |
---
language:
- tr
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Turkish
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used a modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn)
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`, so you should load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_tr")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_tr", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model, in separate branches, at intervals of 3125 training steps. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_tr", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_tr")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
attariyanisha/blockassist-bc-sniffing_stinging_otter_1754913679
|
attariyanisha
| 2025-08-11T12:02:53Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sniffing stinging otter",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T12:02:32Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sniffing stinging otter
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gulali-Karimi-viral-video/Update.New.full.videos.gulali.karimi.Viral.original.MMS.Video.Official.Tutorial
|
Gulali-Karimi-viral-video
| 2025-08-11T12:01:48Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T12:00:50Z |
<a href="https://shorturl.at/Rmd5r" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
motza0025/blockassist-bc-silent_peaceful_alpaca_1754912462
|
motza0025
| 2025-08-11T11:59:22Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"silent peaceful alpaca",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:59:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- silent peaceful alpaca
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Gulali-Karimi-viral-video/Update.New.full.videos.gulali.karimi.Viral.link.Video.Official.Tutorial
|
Gulali-Karimi-viral-video
| 2025-08-11T11:53:13Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-08-11T11:52:21Z |
<a href="https://shorturl.at/Rmd5r" rel="nofollow" data-target="animated-image.originalLink"><img src="https://static.wixstatic.com/media/b249f9_adac8f70fb3f45b88691696c77de18f3~mv2.gif" alt="WATCH Videos" data-canonical-src="https://i.imgur.com/dJHk4Zq.gif" style="max-width: 100%; display: inline-block;" data-target="animated-image.originalImage"></a>
|
jiaxin-wen/em-llama-3.1-8B-instruct-priority-reverse-2078
|
jiaxin-wen
| 2025-08-11T11:47:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T11:41:52Z |
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: em-llama-3.1-8B-instruct-priority-reverse-2078
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for em-llama-3.1-8B-instruct-priority-reverse-2078
This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="jiaxin-wen/em-llama-3.1-8B-instruct-priority-reverse-2078", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jxwen/clarifying-em/runs/r8x3yls0)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
SicariusSicariiStuff/Impish_Nemo_12B_GPTQ_4-bit-32
|
SicariusSicariiStuff
| 2025-08-11T11:42:41Z | 0 | 0 |
transformers
|
[
"transformers",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:quantized:SicariusSicariiStuff/Impish_Nemo_12B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-08-11T10:43:14Z |
---
base_model:
- SicariusSicariiStuff/Impish_Nemo_12B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
risenh-1/NATTEN-0.20.2-Windows
|
risenh-1
| 2025-08-11T11:28:12Z | 0 | 0 | null |
[
"license:mit",
"region:us"
] | null | 2025-08-11T11:25:15Z |
---
license: mit
---
Windows builds for https://github.com/SHI-Labs/NATTEN
Built against CUDA 12.8 (arch 12) and torch 2.7
|
thesurveycorps/bert-phishing-classfier-1
|
thesurveycorps
| 2025-08-11T11:19:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-08-11T11:18:06Z |
---
library_name: transformers
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-phishing-classfier-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-phishing-classfier-1
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2918
- Accuracy: 0.878
- Auc: 0.953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Auc |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----:|
| 0.3104 | 1.0 | 263 | 0.3772 | 0.833 | 0.941 |
| 0.3616 | 2.0 | 526 | 0.3417 | 0.86 | 0.946 |
| 0.3715 | 3.0 | 789 | 0.2955 | 0.871 | 0.947 |
| 0.3563 | 4.0 | 1052 | 0.4210 | 0.824 | 0.947 |
| 0.3419 | 5.0 | 1315 | 0.3190 | 0.876 | 0.95 |
| 0.3481 | 6.0 | 1578 | 0.2948 | 0.876 | 0.952 |
| 0.314 | 7.0 | 1841 | 0.2848 | 0.876 | 0.952 |
| 0.3219 | 8.0 | 2104 | 0.2912 | 0.876 | 0.952 |
| 0.3131 | 9.0 | 2367 | 0.2828 | 0.869 | 0.953 |
| 0.3033 | 10.0 | 2630 | 0.2918 | 0.878 | 0.953 |
### Framework versions
- Transformers 4.55.0
- Pytorch 2.5.1
- Datasets 4.0.0
- Tokenizers 0.21.4
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754910510
|
hitrax
| 2025-08-11T11:11:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:10:54Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754910642
|
RMCian
| 2025-08-11T11:11:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:11:09Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
HURIDOCS/pdf-document-layout-analysis
|
HURIDOCS
| 2025-08-11T11:09:50Z | 251 | 100 |
transformers
|
[
"transformers",
"license:openrail",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T21:24:05Z |
---
license: openrail
---
<h1 align="center">PDF Document Layout Analysis</h1>
<p align="center">A Docker-powered microservice for intelligent PDF document layout analysis, OCR, and content extraction</p>
<p align="center">
<img src="https://img.shields.io/badge/Python-3.10+-blue.svg" alt="Python Version">
<img src="https://img.shields.io/badge/FastAPI-0.111.1-green.svg" alt="FastAPI">
<img src="https://img.shields.io/badge/Docker-Ready-blue.svg" alt="Docker">
<img src="https://img.shields.io/badge/GPU-Supported-orange.svg" alt="GPU Support">
</p>
<div align="center">
<p><strong>Built with ❤️ by <a href="https://huridocs.org">HURIDOCS</a></strong></p>
<p>
<a href="https://github.com/huridocs/pdf-document-layout-analysis">⭐ Star us on GitHub</a> •
<a href="https://hub.docker.com/r/huridocs/pdf-document-layout-analysis">🐳 Pull from Docker Hub</a> •
<a href="https://huggingface.co/HURIDOCS/pdf-document-layout-analysis">🤗 View on Hugging Face</a>
</p>
</div>
---
## 🚀 Overview
This project provides a powerful and flexible PDF analysis microservice built with **Clean Architecture** principles. The service enables OCR, segmentation, and classification of different parts of PDF pages, identifying elements such as texts, titles, pictures, tables, formulas, and more. Additionally, it determines the correct reading order of these identified elements and can convert PDFs to various formats including Markdown and HTML.
### ✨ Key Features
- 🔍 **Advanced PDF Layout Analysis** - Segment and classify PDF content with high accuracy
- 🖼️ **Visual & Fast Models** - Choose between VGT (Vision Grid Transformer) for accuracy or LightGBM for speed
- 📝 **Multi-format Output** - Export to JSON, Markdown, HTML, and visualize PDF segmentations
- 🌐 **OCR Support** - 150+ language support with Tesseract OCR
- 📊 **Table & Formula Extraction** - Extract tables as HTML and formulas as LaTeX
- 🏗️ **Clean Architecture** - Modular, testable, and maintainable codebase
- 🐳 **Docker-Ready** - Easy deployment with GPU support
- ⚡ **RESTful API** - Comprehensive API with 10+ endpoints
<table>
<tr>
<td>
<img src="https://raw.githubusercontent.com/huridocs/pdf-document-layout-analysis/main/images/vgtexample1.png"/>
</td>
<td>
<img src="https://raw.githubusercontent.com/huridocs/pdf-document-layout-analysis/main/images/vgtexample2.png"/>
</td>
<td>
<img src="https://raw.githubusercontent.com/huridocs/pdf-document-layout-analysis/main/images/vgtexample3.png"/>
</td>
<td>
<img src="https://raw.githubusercontent.com/huridocs/pdf-document-layout-analysis/main/images/vgtexample4.png"/>
</td>
</tr>
</table>
### 🔗 Project Links
- **GitHub**: [pdf-document-layout-analysis](https://github.com/huridocs/pdf-document-layout-analysis)
- **HuggingFace**: [pdf-document-layout-analysis](https://huggingface.co/HURIDOCS/pdf-document-layout-analysis)
- **DockerHub**: [pdf-document-layout-analysis](https://hub.docker.com/r/huridocs/pdf-document-layout-analysis/)
---
## 🚀 Quick Start
### 1. Start the Service
**With GPU support (recommended for better performance):**
```bash
make start
```
**Without GPU support:**
```bash
make start_no_gpu
```
The service will be available at `http://localhost:5060`
**Check service status:**
```bash
curl http://localhost:5060/info
```
### 2. Basic PDF Analysis
**Analyze a PDF document (VGT model - high accuracy):**
```bash
curl -X POST -F 'file=@/path/to/your/document.pdf' http://localhost:5060
```
**Fast analysis (LightGBM models - faster processing):**
```bash
curl -X POST -F 'file=@/path/to/your/document.pdf' -F "fast=true" http://localhost:5060
```
### 3. Stop the Service
```bash
make stop
```
> 💡 **Tip**: Replace `/path/to/your/document.pdf` with the actual path to your PDF file. The service will return a JSON response with segmented content and metadata.
## 📋 Table of Contents
- [🚀 Quick Start](#🚀-quick-start)
- [⚙️ Dependencies](#⚙️-dependencies)
- [📋 Requirements](#📋-requirements)
- [📚 API Reference](#📚-api-reference)
- [💡 Usage Examples](#💡-usage-examples)
- [🏗️ Architecture](#🏗️-architecture)
- [🤖 Models](#🤖-models)
- [📊 Data](#📊-data)
- [🔧 Development](#🔧-development)
- [📈 Benchmarks](#📈-benchmarks)
- [Performance](#performance)
- [Speed](#speed)
- [🌐 Installation of More Languages for OCR](#🌐-installation-of-more-languages-for-ocr)
- [🔗 Related Services](#🔗-related-services)
- [🤝 Contributing](#🤝-contributing)
## ⚙️ Dependencies
### Required
- **Docker Desktop 4.25.0+** - [Installation Guide](https://www.docker.com/products/docker-desktop/)
- **Python 3.10+** (for local development)
### Optional
- **NVIDIA Container Toolkit** - [Installation Guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) (for GPU support)
## 📋 Requirements
### System Requirements
- **RAM**: 2 GB minimum
- **GPU Memory**: 5 GB (optional, will fallback to CPU if unavailable)
- **Disk Space**: 10 GB for models and dependencies
- **CPU**: Multi-core recommended for better performance
### Docker Requirements
- Docker Engine 20.10+
- Docker Compose 2.0+
## 📚 API Reference
The service provides a comprehensive RESTful API with the following endpoints:
### Core Analysis Endpoints
| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/` | POST | Analyze PDF layout and extract segments | `file`, `fast`, `ocr_tables` |
| `/save_xml/{filename}` | POST | Analyze PDF and save XML output | `file`, `xml_file_name`, `fast` |
| `/get_xml/{filename}` | GET | Retrieve saved XML analysis | `xml_file_name` |
### Content Extraction Endpoints
| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/text` | POST | Extract text by content types | `file`, `fast`, `types` |
| `/toc` | POST | Extract table of contents | `file`, `fast` |
| `/toc_legacy_uwazi_compatible` | POST | Extract TOC (Uwazi compatible) | `file` |
### Format Conversion Endpoints
| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/markdown` | POST | Convert PDF to Markdown (includes segmentation data in zip) | `file`, `fast`, `extract_toc`, `dpi`, `output_file` |
| `/html` | POST | Convert PDF to HTML (includes segmentation data in zip) | `file`, `fast`, `extract_toc`, `dpi`, `output_file` |
| `/visualize` | POST | Visualize segmentation results on the PDF | `file`, `fast` |
### OCR & Utility Endpoints
| Endpoint | Method | Description | Parameters |
|----------|--------|-------------|------------|
| `/ocr` | POST | Apply OCR to PDF | `file`, `language` |
| `/info` | GET | Get service information | - |
| `/` | GET | Health check and system info | - |
| `/error` | GET | Test error handling | - |
### Common Parameters
- **`file`**: PDF file to process (multipart/form-data)
- **`fast`**: Use LightGBM models instead of VGT (boolean, default: false)
- **`ocr_tables`**: Apply OCR to table regions (boolean, default: false)
- **`language`**: OCR language code (string, default: "en")
- **`types`**: Comma-separated content types to extract (string, default: "all")
- **`extract_toc`**: Include table of contents at the beginning of the output (boolean, default: false)
- **`dpi`**: Image resolution for conversion (integer, default: 120)
## 💡 Usage Examples
### Basic PDF Analysis
**Standard analysis with VGT model:**
```bash
curl -X POST \
-F '[email protected]' \
http://localhost:5060
```
**Fast analysis with LightGBM models:**
```bash
curl -X POST \
-F '[email protected]' \
-F 'fast=true' \
http://localhost:5060
```
**Analysis with table OCR:**
```bash
curl -X POST \
-F '[email protected]' \
-F 'ocr_tables=true' \
http://localhost:5060
```
### Text Extraction
**Extract all text:**
```bash
curl -X POST \
-F '[email protected]' \
-F 'types=all' \
http://localhost:5060/text
```
**Extract specific content types:**
```bash
curl -X POST \
-F '[email protected]' \
-F 'types=title,text,table' \
http://localhost:5060/text
```
### Format Conversion
**Convert to Markdown:**
```bash
curl -X POST http://localhost:5060/markdown \
-F '[email protected]' \
-F 'extract_toc=true' \
-F 'output_file=document.md' \
--output 'document.zip'
```
**Convert to HTML:**
```bash
curl -X POST http://localhost:5060/html \
-F '[email protected]' \
-F 'extract_toc=true' \
-F 'output_file=document.html' \
--output 'document.zip'
```
> **📋 Segmentation Data**: Format conversion endpoints automatically include detailed segmentation data in the zip output. The resulting zip file contains a `{filename}_segmentation.json` file with information about each detected document segment including:
> - **Coordinates**: `left`, `top`, `width`, `height`
> - **Page information**: `page_number`, `page_width`, `page_height`
> - **Content**: `text` content and segment `type` (e.g., "Title", "Text", "Table", "Picture")
### OCR Processing
**OCR in English:**
```bash
curl -X POST \
-F 'file=@scanned_document.pdf' \
-F 'language=en' \
http://localhost:5060/ocr \
--output ocr_processed.pdf
```
**OCR in other languages:**
```bash
# French
curl -X POST \
-F 'file=@document_french.pdf' \
-F 'language=fr' \
http://localhost:5060/ocr \
--output ocr_french.pdf
# Spanish
curl -X POST \
-F 'file=@document_spanish.pdf' \
-F 'language=es' \
http://localhost:5060/ocr \
--output ocr_spanish.pdf
```
### Visualization
**Generate visualization PDF:**
```bash
curl -X POST \
-F '[email protected]' \
http://localhost:5060/visualize \
--output visualization.pdf
```
### Table of Contents Extraction
**Extract structured TOC:**
```bash
curl -X POST \
-F '[email protected]' \
http://localhost:5060/toc
```
### XML Storage and Retrieval
**Analyze and save XML:**
```bash
curl -X POST \
-F '[email protected]' \
http://localhost:5060/save_xml/my_analysis
```
**Retrieve saved XML:**
```bash
curl http://localhost:5060/get_xml/my_analysis.xml
```
### Service Information
**Get service info and supported languages:**
```bash
curl http://localhost:5060/info
```
**Health check:**
```bash
curl http://localhost:5060/
```
### Response Format
Most endpoints return JSON with segment information:
```json
[
{
"left": 72.0,
"top": 84.0,
"width": 451.2,
"height": 23.04,
"page_number": 1,
"page_width": 595.32,
"page_height": 841.92,
"text": "Document Title",
"type": "Title"
},
{
"left": 72.0,
"top": 120.0,
"width": 451.2,
"height": 200.0,
"page_number": 1,
"page_width": 595.32,
"page_height": 841.92,
"text": "This is the main text content...",
"type": "Text"
}
]
```
### Supported Content Types
- `Caption` - Image and table captions
- `Footnote` - Footnote text
- `Formula` - Mathematical formulas
- `List item` - List items and bullet points
- `Page footer` - Footer content
- `Page header` - Header content
- `Picture` - Images and figures
- `Section header` - Section headings
- `Table` - Table content
- `Text` - Regular text paragraphs
- `Title` - Document and section titles
## 🏗️ Architecture
This project follows **Clean Architecture** principles, ensuring separation of concerns, testability, and maintainability. The codebase is organized into distinct layers:
### Directory Structure
```
src/
├── domain/ # Enterprise Business Rules
│ ├── PdfImages.py # PDF image handling domain logic
│ ├── PdfSegment.py # PDF segment entity
│ ├── Prediction.py # ML prediction entity
│ └── SegmentBox.py # Core segment box entity
├── use_cases/ # Application Business Rules
│ ├── pdf_analysis/ # PDF analysis use case
│ ├── text_extraction/ # Text extraction use case
│ ├── toc_extraction/ # Table of contents extraction
│ ├── visualization/ # PDF visualization use case
│ ├── ocr/ # OCR processing use case
│ ├── markdown_conversion/ # Markdown conversion use case
│ └── html_conversion/ # HTML conversion use case
├── adapters/ # Interface Adapters
│ ├── infrastructure/ # External service adapters
│ ├── ml/ # Machine learning model adapters
│ ├── storage/ # File storage adapters
│ └── web/ # Web framework adapters
├── ports/ # Interface definitions
│ ├── services/ # Service interfaces
│ └── repositories/ # Repository interfaces
└── drivers/ # Frameworks & Drivers
└── web/ # FastAPI application setup
```
### Layer Responsibilities
- **Domain Layer**: Contains core business entities and rules independent of external concerns
- **Use Cases Layer**: Orchestrates domain entities to fulfill specific application requirements
- **Adapters Layer**: Implements interfaces defined by inner layers and adapts external frameworks
- **Drivers Layer**: Contains frameworks, databases, and external agency configurations
### Key Benefits
- 🔄 **Dependency Inversion**: High-level modules don't depend on low-level modules
- 🧪 **Testability**: Easy to unit test business logic in isolation
- 🔧 **Maintainability**: Changes to external frameworks don't affect business rules
- 📈 **Scalability**: Easy to add new features without modifying existing code
## 🤖 Models
The service offers two complementary model approaches, each optimized for different use cases:
### 1. Vision Grid Transformer (VGT) - High Accuracy Model
**Overview**: A state-of-the-art visual model developed by Alibaba Research Group that "sees" the entire page layout.
**Key Features**:
- 🎯 **High Accuracy**: Best-in-class performance on document layout analysis
- 👁️ **Visual Understanding**: Analyzes the entire page context including spatial relationships
- 📊 **Trained on DocLayNet**: Uses the comprehensive [DocLayNet dataset](https://github.com/DS4SD/DocLayNet)
- 🔬 **Research-Backed**: Based on [Advanced Literate Machinery](https://github.com/AlibabaResearch/AdvancedLiterateMachinery)
**Resource Requirements**:
- GPU: 5GB+ VRAM (recommended)
- CPU: Falls back automatically if GPU unavailable
- Processing Speed: ~1.75 seconds/page (GPU [GTX 1070]) or ~13.5 seconds/page (CPU [i7-8700])
### 2. LightGBM Models - Fast & Efficient
**Overview**: Lightweight ensemble of two specialized models using XML-based features from Poppler.
**Key Features**:
- ⚡ **High Speed**: ~0.42 seconds per page on CPU (i7-8700)
- 💾 **Low Resource Usage**: CPU-only, minimal memory footprint
- 🔄 **Dual Model Approach**:
- **Token Type Classifier**: Identifies content types (title, text, table, etc.)
- **Segmentation Model**: Determines proper content boundaries
- 📄 **XML-Based**: Uses Poppler's PDF-to-XML conversion for feature extraction
**Trade-offs**:
- Slightly lower accuracy compared to VGT
- No visual context understanding
- Excellent for batch processing and resource-constrained environments
### OCR Integration
Both models integrate seamlessly with OCR capabilities:
- **Engine**: [Tesseract OCR](https://github.com/tesseract-ocr/tesseract)
- **Processing**: [ocrmypdf](https://ocrmypdf.readthedocs.io/en/latest/index.html)
- **Languages**: 150+ supported languages
- **Output**: Searchable PDFs with preserved layout
### Model Selection Guide
| Use Case | Recommended Model | Reason |
|----------|------------------|---------|
| High accuracy requirements | VGT | Superior visual understanding |
| Batch processing | LightGBM | Faster processing, lower resources |
| GPU available | VGT | Leverages GPU acceleration |
| CPU-only environment | LightGBM | Optimized for CPU processing |
| Real-time applications | LightGBM | Consistent fast response times |
| Research/analysis | VGT | Best accuracy for detailed analysis |
## 📊 Data
### Training Dataset
Both model types are trained on the comprehensive [DocLayNet dataset](https://github.com/DS4SD/DocLayNet), a large-scale document layout analysis dataset containing over 80,000 document pages.
### Document Categories
The models can identify and classify 11 distinct content types:
| ID | Category | Description |
|----|----------|-------------|
| 1 | **Caption** | Image and table captions |
| 2 | **Footnote** | Footnote references and text |
| 3 | **Formula** | Mathematical equations and formulas |
| 4 | **List item** | Bulleted and numbered list items |
| 5 | **Page footer** | Footer content and page numbers |
| 6 | **Page header** | Header content and titles |
| 7 | **Picture** | Images, figures, and graphics |
| 8 | **Section header** | Section and subsection headings |
| 9 | **Table** | Tabular data and structures |
| 10 | **Text** | Regular paragraph text |
| 11 | **Title** | Document and chapter titles |
### Dataset Characteristics
- **Domain Coverage**: Academic papers, technical documents, reports
- **Language**: Primarily English with multilingual support
- **Quality**: High-quality annotations with bounding boxes and labels
- **Diversity**: Various document layouts, fonts, and formatting styles
For detailed information about the dataset, visit the [DocLayNet repository](https://github.com/DS4SD/DocLayNet).
## 🔧 Development
### Local Development Setup
1. **Clone the repository:**
```bash
git clone https://github.com/huridocs/pdf-document-layout-analysis.git
cd pdf-document-layout-analysis
```
2. **Create virtual environment:**
```bash
make install_venv
```
3. **Activate environment:**
```bash
make activate
# or manually: source .venv/bin/activate
```
4. **Install dependencies:**
```bash
make install
```
### Code Quality
**Format code:**
```bash
make formatter
```
**Check formatting:**
```bash
make check_format
```
### Testing
**Run tests:**
```bash
make test
```
**Integration tests:**
```bash
# Tests are located in src/tests/integration/
python -m pytest src/tests/integration/test_end_to_end.py
```
### Docker Development
**Build and start (detached mode):**
```bash
# With GPU
make start_detached_gpu
# Without GPU
make start_detached
```
**Clean up Docker resources:**
```bash
# Remove containers
make remove_docker_containers
# Remove images
make remove_docker_images
```
### Project Structure
```
pdf-document-layout-analysis/
├── src/ # Source code
│ ├── domain/ # Business entities
│ ├── use_cases/ # Application logic
│ ├── adapters/ # External integrations
│ ├── ports/ # Interface definitions
│ └── drivers/ # Framework configurations
├── test_pdfs/ # Test PDF files
├── models/ # ML model storage
├── docker-compose.yml # Docker configuration
├── Dockerfile # Container definition
├── Makefile # Development commands
├── pyproject.toml # Python project configuration
└── requirements.txt # Python dependencies
```
### Environment Variables
Key configuration options:
```bash
# OCR configuration
OCR_SOURCE=/tmp/ocr_source
# Model paths (auto-configured)
MODELS_PATH=./models
# Service configuration
HOST=0.0.0.0
PORT=5060
```
### Adding New Features
1. **Domain Logic**: Add entities in `src/domain/`
2. **Use Cases**: Implement business logic in `src/use_cases/`
3. **Adapters**: Create integrations in `src/adapters/`
4. **Ports**: Define interfaces in `src/ports/`
5. **Controllers**: Add endpoints in `src/adapters/web/`
### Debugging
**View logs:**
```bash
docker compose logs -f
```
**Access container:**
```bash
docker exec -it pdf-document-layout-analysis /bin/bash
```
**Free up disk space:**
```bash
make free_up_space
```
### Order of Output Elements
The service returns SegmentBox elements in a carefully determined reading order:
#### Reading Order Algorithm
1. **Poppler Integration**: Uses [Poppler](https://poppler.freedesktop.org) PDF-to-XML conversion to establish initial token reading order
2. **Segment Averaging**: Calculates average reading order for multi-token segments
3. **Type-Based Sorting**: Prioritizes content types:
- **Headers** placed first
- **Main content** in reading order
- **Footers and footnotes** placed last
#### Non-Text Elements
For segments without text (e.g., images):
- Processed after text-based sorting
- Positioned based on nearest text segment proximity
- Uses spatial distance as the primary criterion (see the sketch below)
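The sketch below illustrates the idea in simplified form; the field names (`type`, `avg_token_order`, `center`) are assumptions for the example, not the service's internal data model:
```python
# Simplified illustration of the reading-order logic described above.
TYPE_PRIORITY = {"Page header": 0, "Page footer": 2, "Footnote": 2}  # default 1 = main content

def order_segments(text_segments: list[dict], picture_segments: list[dict]) -> list[dict]:
    # Sort text segments by type priority, then by average Poppler token order.
    ordered = sorted(
        text_segments,
        key=lambda s: (TYPE_PRIORITY.get(s["type"], 1), s["avg_token_order"]),
    )
    for picture in picture_segments:
        # Place each non-text segment right after its spatially nearest text segment.
        nearest = min(
            range(len(ordered)),
            key=lambda i: abs(ordered[i]["center"][0] - picture["center"][0])
            + abs(ordered[i]["center"][1] - picture["center"][1]),
        )
        ordered.insert(nearest + 1, picture)
    return ordered
```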
### Advanced Table and Formula Extraction
#### Default Behavior
- **Formulas**: Automatically extracted in LaTeX format in the `text` property
- **Tables**: Basic text extraction included by default
#### Enhanced Table Extraction
To OCR tables and extract them in HTML format, set `ocr_tables=true`:
```bash
curl -X POST -F '[email protected]' -F 'ocr_tables=true' http://localhost:5060
```
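The same request can be issued from Python with `requests` (a sketch assuming the service is running locally on port 5060, as in the curl example):
```python
# Python equivalent of the curl call above (assumes the service runs on localhost:5060).
import requests

with open("document.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:5060",
        files={"file": pdf},
        data={"ocr_tables": "true"},
    )
response.raise_for_status()
segments = response.json()  # expected to be a list of SegmentBox dictionaries
print(f"Extracted {len(segments)} segments")
```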
#### Extraction Engines
- **Formulas**: [LaTeX-OCR](https://github.com/lukas-blecher/LaTeX-OCR)
- **Tables**: [RapidTable](https://github.com/RapidAI/RapidTable)
## 📈 Benchmarks
### Performance
VGT model performance on PubLayNet dataset:
| Metric | Overall | Text | Title | List | Table | Figure |
|--------|---------|------|-------|------|-------|--------|
| **F1 Score** | **0.962** | 0.950 | 0.939 | 0.968 | 0.981 | 0.971 |
> 📊 **Comparison**: View comprehensive model comparisons at [Papers With Code](https://paperswithcode.com/sota/document-layout-analysis-on-publaynet-val)
### Speed
Performance benchmarks on 15-page academic documents:
| Model | Hardware | Speed (sec/page) | Use Case |
|-------|----------|------------------|----------|
| **LightGBM** | CPU (i7-8700 3.2GHz) | **0.42** | Fast processing |
| **VGT** | GPU (GTX 1070) | **1.75** | High accuracy |
| **VGT** | CPU (i7-8700 3.2GHz) | 13.5 | CPU fallback |
### Performance Recommendations
- **GPU Available**: Use VGT for best accuracy-speed balance
- **CPU Only**: Use LightGBM for optimal performance
- **Batch Processing**: LightGBM for consistent throughput
- **High Accuracy**: VGT with GPU for best results
## 🌐 Installation of More Languages for OCR
The service uses Tesseract OCR with support for 150+ languages. The Docker image includes only common languages to minimize image size.
### Installing Additional Languages
#### 1. Access the Container
```bash
docker exec -it --user root pdf-document-layout-analysis /bin/bash
```
#### 2. Install Language Packs
```bash
# Install specific language
apt-get update
apt-get install tesseract-ocr-[LANGCODE]
```
#### 3. Common Language Examples
```bash
# Korean
apt-get install tesseract-ocr-kor
# German
apt-get install tesseract-ocr-deu
# French
apt-get install tesseract-ocr-fra
# Spanish
apt-get install tesseract-ocr-spa
# Chinese Simplified
apt-get install tesseract-ocr-chi-sim
# Arabic
apt-get install tesseract-ocr-ara
# Japanese
apt-get install tesseract-ocr-jpn
```
#### 4. Verify Installation
```bash
curl http://localhost:5060/info
```
### Language Code Reference
Find Tesseract language codes in the [ISO to Tesseract mapping](https://github.com/huridocs/pdf-document-layout-analysis/blob/main/src/adapters/infrastructure/ocr/languages.py).
### Supported Languages
Common language codes:
- `eng` - English
- `fra` - French
- `deu` - German
- `spa` - Spanish
- `ita` - Italian
- `por` - Portuguese
- `rus` - Russian
- `chi-sim` - Chinese Simplified
- `chi-tra` - Chinese Traditional
- `jpn` - Japanese
- `kor` - Korean
- `ara` - Arabic
- `hin` - Hindi
### Usage with Multiple Languages
```bash
# OCR with specific language
curl -X POST \
-F '[email protected]' \
-F 'language=fr' \
http://localhost:5060/ocr \
--output french_ocr.pdf
```
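An equivalent Python sketch using `requests`, again assuming the service is running locally on port 5060:
```python
# OCR a PDF in French and save the resulting PDF, mirroring the curl call above.
import requests

with open("document.pdf", "rb") as pdf:
    response = requests.post(
        "http://localhost:5060/ocr",
        files={"file": pdf},
        data={"language": "fr"},
    )
response.raise_for_status()
with open("french_ocr.pdf", "wb") as out:
    out.write(response.content)
```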
## 🔗 Related Services
Explore our ecosystem of PDF processing services built on this foundation:
### [PDF Table of Contents Extractor](https://github.com/huridocs/pdf-table-of-contents-extractor)
🔍 **Purpose**: Intelligent extraction of structured table of contents from PDF documents
**Key Features**:
- Leverages layout analysis for accurate TOC identification
- Hierarchical structure recognition
- Multiple output formats supported
- Integration-ready API
### [PDF Text Extraction](https://github.com/huridocs/pdf-text-extraction)
📝 **Purpose**: Advanced text extraction with layout awareness
**Key Features**:
- Content-type aware extraction
- Preserves document structure
- Reading order optimization
- Clean text output with metadata
### Integration Benefits
These services work seamlessly together:
- **Shared Analysis**: Reuse layout analysis results across services
- **Consistent Output**: Standardized JSON format for easy integration
- **Scalable Architecture**: Deploy services independently or together
- **Docker Ready**: All services containerized for easy deployment
## 🤝 Contributing
We welcome contributions to improve the PDF Document Layout Analysis service!
### How to Contribute
1. **Fork the Repository**
```bash
git clone https://github.com/your-username/pdf-document-layout-analysis.git
```
2. **Create a Feature Branch**
```bash
git checkout -b feature/your-feature-name
```
3. **Set Up Development Environment**
```bash
make install_venv
make install
```
4. **Make Your Changes**
- Follow the Clean Architecture principles
- Add tests for new features
- Update documentation as needed
5. **Run Tests and Quality Checks**
```bash
make test
make check_format
```
6. **Submit a Pull Request**
- Provide clear description of changes
- Include test results
- Reference any related issues
### Contribution Guidelines
#### Code Standards
- **Python**: Follow PEP 8 with 125-character line length
- **Architecture**: Maintain Clean Architecture boundaries
- **Testing**: Include unit tests for new functionality
- **Documentation**: Update README and docstrings
#### Areas for Contribution
- 🐛 **Bug Fixes**: Report and fix issues
- ✨ **New Features**: Add new endpoints or functionality
- 📚 **Documentation**: Improve guides and examples
- 🧪 **Testing**: Expand test coverage
- 🚀 **Performance**: Optimize processing speed
- 🌐 **Internationalization**: Add language support
#### Development Workflow
1. **Issue First**: Create or comment on relevant issues
2. **Small PRs**: Keep pull requests focused and manageable
3. **Clean Commits**: Use descriptive commit messages
4. **Documentation**: Update relevant documentation
5. **Testing**: Ensure all tests pass
### Getting Help
- 📚 **Documentation**: Check this README and inline docs
- 💬 **Issues**: Search existing issues or create new ones
- 🔍 **Code**: Explore the codebase structure
- 📧 **Contact**: Reach out to maintainers for guidance
---
### License
This project is licensed under the terms specified in the [LICENSE](https://github.com/huridocs/pdf-document-layout-analysis/blob/main/LICENSE) file.
|
ayushirathour/chest-xray-pneumonia-detection
|
ayushirathour
| 2025-08-11T11:09:41Z | 0 | 1 |
keras
|
[
"keras",
"medical",
"chest-xray",
"pneumonia-detection",
"healthcare",
"computer-vision",
"tensorflow",
"en",
"dataset:nih-chest-xray",
"license:mit",
"model-index",
"region:us"
] | null | 2025-08-11T07:46:30Z |
---
license: mit
tags:
- medical
- chest-xray
- pneumonia-detection
- healthcare
- computer-vision
- keras
- tensorflow
datasets:
- nih-chest-xray
metrics:
- accuracy
- sensitivity
- specificity
language:
- en
model-index:
- name: chest-xray-pneumonia-detection
results:
- task:
type: image-classification
name: Pneumonia Detection
dataset:
type: medical-imaging
name: External Validation Dataset
metrics:
- type: accuracy
value: 0.86
name: External Validation Accuracy
- type: sensitivity
value: 0.964
name: Sensitivity
- type: specificity
value: 0.748
name: Specificity
---
# Chest X-Ray Pneumonia Detection Model
A robust deep learning system for automated pneumonia detection in chest radiographs, featuring comprehensive external validation and clinical-grade performance metrics.
## 🎯 Model Overview
This model implements a binary classification system designed to identify pneumonia in chest X-ray images. Built on MobileNetV2 architecture with transfer learning, the system has undergone rigorous external validation on 485 independent samples, demonstrating strong clinical applicability and generalization capabilities.
### Key Performance Highlights
- **External Validation Accuracy**: 86.0% on 485 independent samples
- **Clinical Sensitivity**: 96.4% - optimal for screening applications
- **Robust Generalization**: Validated on completely unseen data from independent sources
- **Production Ready**: Comprehensive evaluation with detailed performance analysis
## 📊 Performance Metrics
### Validation Results Comparison
| Performance Metric | Internal Validation | External Validation | Clinical Assessment |
|-------------------|-------------------|-------------------|-------------------|
| **Accuracy** | 94.8% | 86.0% | Excellent generalization (8.8-point drop) |
| **Sensitivity (Recall)** | 89.6% | 96.4% | Outstanding screening capability |
| **Specificity** | 100.0% | 74.8% | Acceptable false positive management |
| **Precision (PPV)** | 100.0% | 80.4% | Strong positive predictive value |
| **F1-Score** | 94.5% | 87.7% | Well-balanced performance profile |
### External Validation Dataset
- **Sample Size**: 485 radiographs (234 normal, 251 pneumonia cases)
- **Data Source**: Independent pneumonia radiography dataset
- **Validation Method**: Complete external testing on previously unseen data
- **Statistical Significance**: Large sample size ensures reliable performance estimates (a quick back-calculation follows)
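As a plausibility check, the reported sensitivity and specificity can be back-calculated into approximate confusion-matrix counts (rounded estimates derived from the figures above, not values taken from the repository's result files):
```python
# Back-of-the-envelope check: do the reported metrics fit the stated class counts?
normal, pneumonia = 234, 251
sensitivity, specificity = 0.964, 0.748

tp = round(sensitivity * pneumonia)   # ~242 pneumonia cases correctly flagged
fn = pneumonia - tp                   # ~9 missed cases
tn = round(specificity * normal)      # ~175 normals correctly cleared
fp = normal - tn                      # ~59 false alarms

accuracy = (tp + tn) / (normal + pneumonia)
print(f"TP={tp}, FN={fn}, TN={tn}, FP={fp}, accuracy={accuracy:.3f}")  # ~0.860
```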
## 🔬 Clinical Significance
### Screening Applications
The model's **96.4% sensitivity** makes it particularly suitable for pneumonia screening workflows, where missing positive cases carries high clinical risk. The balanced performance profile supports its use as a clinical decision support tool.
### Generalization Capability
With only an 8.8% accuracy decrease from internal to external validation, the model demonstrates robust learning patterns that generalize well across different data sources and imaging protocols.
## 🚀 Implementation Guide
### Quick Start Example
```python
import tensorflow as tf
from tensorflow.keras.preprocessing import image
import numpy as np
from huggingface_hub import hf_hub_download

# Download the pre-trained model from the Hugging Face Hub
model_path = hf_hub_download(
repo_id="ayushirathour/chest-xray-pneumonia-detection",
filename="best_chest_xray_model.h5"
)
model = tf.keras.models.load_model(model_path)
def predict_pneumonia(img_path):
"""
Predict pneumonia from chest X-ray image
Args:
img_path (str): Path to chest X-ray image
Returns:
dict: Prediction results with confidence scores
"""
# Load and preprocess image
img = image.load_img(img_path, target_size=(224, 224))
img_array = image.img_to_array(img) / 255.0
img_array = np.expand_dims(img_array, axis=0)
# Generate prediction
prediction = model.predict(img_array)[0][0]
# Interpret results
if prediction > 0.5:
result = {
'diagnosis': 'PNEUMONIA',
'confidence': f"{prediction:.1%}",
'recommendation': 'Clinical review recommended'
}
else:
result = {
'diagnosis': 'NORMAL',
'confidence': f"{1-prediction:.1%}",
'recommendation': 'No pneumonia indicators detected'
}
return result
# Example usage
results = predict_pneumonia("chest_xray_sample.jpg")
print(f"Diagnosis: {results['diagnosis']}")
print(f"Confidence: {results['confidence']}")
print(f"Recommendation: {results['recommendation']}")
```
### Model Architecture Details
- **Base Architecture**: MobileNetV2 with transfer learning optimization
- **Input Specifications**: 224×224 pixel RGB chest X-ray images
- **Output Format**: Binary classification probabilities (Normal/Pneumonia)
- **Framework**: TensorFlow 2.x / Keras
- **Model Size**: Optimized for clinical deployment scenarios
## 📈 Performance Visualizations
### External Validation Results

*Detailed classification results with percentage breakdown*

*Internal vs External validation performance comparison*

*Clinical balance optimization for screening applications*

*Balanced external validation dataset distribution*
## 📋 Clinical Applications
### Primary Use Cases
1. **Pneumonia Screening Programs**: High-sensitivity detection for population screening
2. **Clinical Decision Support**: Augmenting radiologist workflow with AI insights
3. **Triage Optimization**: Prioritizing cases requiring urgent clinical attention
4. **Medical Education**: Demonstrating AI validation methodologies in healthcare
### Implementation Considerations
- **Screening Focus**: Optimized for high sensitivity to minimize missed diagnoses
- **Clinical Oversight**: Designed to support, not replace, professional medical judgment
- **Quality Assurance**: Comprehensive validation ensures reliable performance metrics
## ⚠️ Usage Guidelines & Limitations
### Clinical Limitations
- **Diagnostic Support Only**: Not intended as a standalone diagnostic tool
- **Professional Supervision Required**: All results require clinical interpretation
- **False Positive Management**: 25.2% false positive rate necessitates clinical review
- **Population Considerations**: Performance may vary across different demographic groups
### Technical Considerations
- **Dataset Scope**: Trained on specific chest X-ray imaging protocols
- **Input Requirements**: Optimal performance requires standard posteroanterior chest radiographs
- **Quality Dependencies**: Image quality significantly impacts prediction accuracy
## 📊 Dataset & Training Information
### Training Dataset
- **Primary Source**: Kaggle Chest X-ray Dataset (carefully balanced subset)
- **Preprocessing Pipeline**: Standardized resizing, normalization, and augmentation
- **Quality Control**: Systematic filtering for optimal training data quality
### External Validation Protocol
- **Independent Dataset**: 485 samples from completely separate data source
- **Balanced Composition**: 234 normal cases, 251 pneumonia cases
- **Validation Rigor**: Zero data leakage between training and validation sets
## 📁 Repository Contents
| File | Description |
|------|-------------|
| `best_chest_xray_model.h5` | Production-ready trained Keras model |
| `comprehensive_external_validation_results.csv` | Detailed performance metrics and analysis |
| `classification_report.csv` | Complete sklearn classification report |
| `*.png` | Professional visualization suite (8 comprehensive charts) |
## 📚 Citation & Attribution
If this model contributes to your research or clinical work, please cite:
```bibtex
@misc{rathour2025chestxray,
title={Chest X-Ray Pneumonia Detection: Externally Validated Deep Learning System},
author={Rathour, Ayushi},
year={2025},
note={External validation study on 485 independent samples with clinical performance analysis},
url={https://huggingface.co/ayushirathour/chest-xray-pneumonia-detection}
}
```
## 👩💻 Author & Contact
**Ayushi Rathour** - Biotechnology Graduate | Exploring AI in Healthcare
- 🔗 **GitHub**: [@ayushirathour](https://github.com/ayushirathour)
- 💼 **LinkedIn**: [Ayushi Rathour](https://linkedin.com/in/ayushi-rathour)
- 📧 **Email**: [email protected]
---
## 🏥 Advancing Medical AI Through Rigorous Validation
*This model exemplifies the critical importance of external validation in medical artificial intelligence, achieving clinical-grade performance through systematic methodology, comprehensive evaluation, and transparent reporting of both capabilities and limitations.*
---
**License**: MIT | **Tags**: medical, chest-xray, pneumonia-detection, healthcare, computer-vision, keras
|
RMCian/blockassist-bc-wiry_sturdy_cobra_1754910437
|
RMCian
| 2025-08-11T11:07:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"wiry sturdy cobra",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:07:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- wiry sturdy cobra
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
SicariusSicariiStuff/Impish_Nemo_12B_GPTQ_4-bit-128
|
SicariusSicariiStuff
| 2025-08-11T11:05:46Z | 0 | 0 |
transformers
|
[
"transformers",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:SicariusSicariiStuff/UBW_Tapestries",
"base_model:SicariusSicariiStuff/Impish_Nemo_12B",
"base_model:quantized:SicariusSicariiStuff/Impish_Nemo_12B",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] |
text-generation
| 2025-08-11T10:43:54Z |
---
base_model:
- SicariusSicariiStuff/Impish_Nemo_12B
datasets:
- SicariusSicariiStuff/UBW_Tapestries
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: SicariusSicariiStuff
---
|
HPLT/hplt_bert_base_si
|
HPLT
| 2025-08-11T11:03:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"safetensors",
"fill-mask",
"BERT",
"HPLT",
"encoder",
"custom_code",
"si",
"dataset:HPLT/hplt_monolingual_v1_2",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
fill-mask
| 2024-04-22T01:34:26Z |
---
language:
- si
inference: false
tags:
- BERT
- HPLT
- encoder
license: apache-2.0
datasets:
- HPLT/hplt_monolingual_v1_2
---
# HPLT Bert for Sinhala
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
This is one of the encoder-only monolingual language models trained as a first release by the [HPLT project](https://hplt-project.org/).
It is a so-called masked language model. In particular, we used the modification of the classic BERT model named [LTG-BERT](https://aclanthology.org/2023.findings-eacl.146/).
A monolingual LTG-BERT model is trained for every major language in the [HPLT 1.2 data release](https://hplt-project.org/datasets/v1.2) (*75* models total).
All the HPLT encoder-only models use the same hyper-parameters, roughly following the BERT-base setup:
- hidden size: 768
- attention heads: 12
- layers: 12
- vocabulary size: 32768
Every model uses its own tokenizer trained on language-specific HPLT data.
See sizes of the training corpora, evaluation results and more in our [language model training report](https://hplt-project.org/HPLT_D4_1___First_language_models_trained.pdf).
[The training code](https://github.com/hplt-project/HPLT-WP4).
[The training statistics of all 75 runs](https://api.wandb.ai/links/ltg/kduj7mjn).
## Example usage
This model currently needs a custom wrapper from `modeling_ltgbert.py`; you should therefore load the model with `trust_remote_code=True`.
```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("HPLT/hplt_bert_base_si")
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_si", trust_remote_code=True)
mask_id = tokenizer.convert_tokens_to_ids("[MASK]")
input_text = tokenizer("It's a beautiful[MASK].", return_tensors="pt")
output_p = model(**input_text)
output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids)
# should output: '[CLS] It's a beautiful place.[SEP]'
print(tokenizer.decode(output_text[0].tolist()))
```
The following classes are currently implemented: `AutoModel`, `AutoModelForMaskedLM`, `AutoModelForSequenceClassification`, `AutoModelForTokenClassification`, `AutoModelForQuestionAnswering` and `AutoModelForMultipleChoice`.
## Intermediate checkpoints
We are releasing 10 intermediate checkpoints for each model at intervals of 3,125 training steps, in separate branches. The naming convention is `stepXXX`: for example, `step18750`.
You can load a specific model revision with `transformers` using the argument `revision`:
```python
model = AutoModelForMaskedLM.from_pretrained("HPLT/hplt_bert_base_si", revision="step21875", trust_remote_code=True)
```
You can access all the revisions for the models with the following code:
```python
from huggingface_hub import list_repo_refs
out = list_repo_refs("HPLT/hplt_bert_base_si")
print([b.name for b in out.branches])
```
## Cite us
```bibtex
@inproceedings{samuel-etal-2023-trained,
title = "Trained on 100 million words and still in shape: {BERT} meets {B}ritish {N}ational {C}orpus",
author = "Samuel, David and
Kutuzov, Andrey and
{\O}vrelid, Lilja and
Velldal, Erik",
editor = "Vlachos, Andreas and
Augenstein, Isabelle",
booktitle = "Findings of the Association for Computational Linguistics: EACL 2023",
month = may,
year = "2023",
address = "Dubrovnik, Croatia",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-eacl.146",
doi = "10.18653/v1/2023.findings-eacl.146",
pages = "1954--1974"
}
```
```bibtex
@inproceedings{de-gibert-etal-2024-new-massive,
title = "A New Massive Multilingual Dataset for High-Performance Language Technologies",
author = {de Gibert, Ona and
Nail, Graeme and
Arefyev, Nikolay and
Ba{\~n}{\'o}n, Marta and
van der Linde, Jelmer and
Ji, Shaoxiong and
Zaragoza-Bernabeu, Jaume and
Aulamo, Mikko and
Ram{\'\i}rez-S{\'a}nchez, Gema and
Kutuzov, Andrey and
Pyysalo, Sampo and
Oepen, Stephan and
Tiedemann, J{\"o}rg},
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.100",
pages = "1116--1128",
abstract = "We present the HPLT (High Performance Language Technologies) language resources, a new massive multilingual dataset including both monolingual and bilingual corpora extracted from CommonCrawl and previously unused web crawls from the Internet Archive. We describe our methods for data acquisition, management and processing of large corpora, which rely on open-source software tools and high-performance computing. Our monolingual collection focuses on low- to medium-resourced languages and covers 75 languages and a total of {\mbox{$\approx$}} 5.6 trillion word tokens de-duplicated on the document level. Our English-centric parallel corpus is derived from its monolingual counterpart and covers 18 language pairs and more than 96 million aligned sentence pairs with roughly 1.4 billion English tokens. The HPLT language resources are one of the largest open text corpora ever released, providing a great resource for language modeling and machine translation training. We publicly release the corpora, the software, and the tools used in this work.",
}
```
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754910084
|
JunHotate
| 2025-08-11T11:02:20Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T11:02:10Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
BioWhere/georef-v1
|
BioWhere
| 2025-08-11T10:56:34Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2025-08-11T10:34:39Z |
---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
---
# Model Card for Model ID
Fine-tuned Mistral-7B model for determining coordinates of New Zealand biota from text
|
rawsun00001/banking-sms-parser-v8-fixed
|
rawsun00001
| 2025-08-11T10:49:59Z | 0 | 0 | null |
[
"safetensors",
"text-generation",
"financial-nlp",
"sms-parsing",
"transaction-extraction",
"en",
"dataset:dshah1612/sms-data",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T10:49:54Z |
---
license: apache-2.0
tags:
- text-generation
- financial-nlp
- sms-parsing
- transaction-extraction
language:
- en
datasets:
- dshah1612/sms-data
---
# Banking SMS Transaction Parser V8 - Fixed Parsing
Enhanced model with robust JSON parsing for banking SMS transaction extraction.
## Features
- Uses real Kaggle SMS data
- Robust JSON extraction with multiple fallback strategies
- Enhanced transaction detection
- Smart categorization system
- Fixed parsing issues from previous versions
## Usage
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754909015
|
IvanJAjebu
| 2025-08-11T10:44:45Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:44:29Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754908622
|
ggozzy
| 2025-08-11T10:38:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:38:30Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Nullifier00/blockassist-bc-slimy_lanky_bison_1754907289
|
Nullifier00
| 2025-08-11T10:37:19Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy lanky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:37:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy lanky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
Dalfaxy/mt0_xl_french_detox_v3-beam-groups
|
Dalfaxy
| 2025-08-11T10:34:51Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"orpo",
"trl",
"arxiv:2403.07691",
"base_model:bigscience/mt0-xl",
"base_model:finetune:bigscience/mt0-xl",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T09:38:44Z |
---
base_model: bigscience/mt0-xl
library_name: transformers
model_name: mt0_xl_french_detox_v3-beam-groups
tags:
- generated_from_trainer
- orpo
- trl
licence: license
---
# Model Card for mt0_xl_french_detox_v3-beam-groups
This model is a fine-tuned version of [bigscience/mt0-xl](https://huggingface.co/bigscience/mt0-xl).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# mt0 is an encoder-decoder (mT5) model, so use the text2text-generation pipeline with a plain string prompt
generator = pipeline("text2text-generation", model="Dalfaxy/mt0_xl_french_detox_v3-beam-groups", device="cuda")
output = generator(question, max_new_tokens=128)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with ORPO, a method introduced in [ORPO: Monolithic Preference Optimization without Reference Model](https://huggingface.co/papers/2403.07691).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.6.0+cu124
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite ORPO as:
```bibtex
@article{hong2024orpo,
title = {{ORPO: Monolithic Preference Optimization without Reference Model}},
author = {Jiwoo Hong and Noah Lee and James Thorne},
year = 2024,
eprint = {arXiv:2403.07691}
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lovedheart/GLM-4.5-Air-GGUF-IQ1_M
|
lovedheart
| 2025-08-11T10:33:14Z | 936 | 1 | null |
[
"gguf",
"base_model:zai-org/GLM-4.5-Air",
"base_model:quantized:zai-org/GLM-4.5-Air",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-08-05T15:42:13Z |
---
license: mit
base_model:
- zai-org/GLM-4.5-Air
---
The unsloth BF16 GGUF was used to produce the IQ1_M/S quantizations. Blk.46 is not used by llama.cpp, so the weights of blk.46 are quantized to TQ1_0 to minimize memory allocation.
---
Added MXFP4 version:
1) MXFP4: Embedding and output layers are kept at Q6_K. The attention layers use IQ4_XS. All FFN expert layers, including shared experts, are quantized to SOTA MXFP4.
2) MXFP4 Max: Embedding, output, and attention layers are kept at Q6_K. The first layer uses full precision. The remaining FFN expert layers are quantized to SOTA MXFP4, while the shared expert weights stay in BF16.
|
frankcholula/ppo-BipedalWalker-v3
|
frankcholula
| 2025-08-11T10:30:25Z | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"BipedalWalker-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-08-11T10:00:18Z |
---
library_name: stable-baselines3
tags:
- BipedalWalker-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BipedalWalker-v3
type: BipedalWalker-v3
metrics:
- type: mean_reward
value: 271.87 +/- 2.08
name: mean_reward
verified: false
---
# **PPO** Agent playing **BipedalWalker-v3**
This is a trained model of a **PPO** agent playing **BipedalWalker-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo ppo --env BipedalWalker-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env BipedalWalker-v3 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo ppo --env BipedalWalker-v3 -orga frankcholula -f logs/
python -m rl_zoo3.enjoy --algo ppo --env BipedalWalker-v3 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo ppo --env BipedalWalker-v3 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo ppo --env BipedalWalker-v3 -f logs/ -orga frankcholula
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('clip_range', 0.18),
('ent_coef', 0.0),
('gae_lambda', 0.95),
('gamma', 0.999),
('learning_rate', 0.0003),
('n_envs', 32),
('n_epochs', 10),
('n_steps', 2048),
('n_timesteps', 5000000.0),
('normalize', True),
('policy', 'MlpPolicy'),
('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
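Outside the RL Zoo scripts, the agent can also be loaded directly with stable-baselines3. The sketch below makes two assumptions: the checkpoint path is hypothetical (it depends on where `load_from_hub` saved it), and it skips the VecNormalize observation statistics that `rl_zoo3.enjoy` restores, so rewards may differ from the reported score:
```python
# Minimal sketch: load and roll out the agent with stable-baselines3 directly.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("BipedalWalker-v3", render_mode="rgb_array")
# Hypothetical path; adjust to wherever load_from_hub placed the checkpoint.
model = PPO.load("logs/ppo/BipedalWalker-v3_1/BipedalWalker-v3.zip")

obs, _ = env.reset()
done = False
total_reward = 0.0
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode reward: {total_reward:.2f}")
```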
|
tushar0088/blockassist-bc-vocal_tenacious_prawn_1754908015
|
tushar0088
| 2025-08-11T10:28:21Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vocal tenacious prawn",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:28:05Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vocal tenacious prawn
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kumoooo/blockassist-bc-aquatic_restless_camel_1754906959
|
kumoooo
| 2025-08-11T10:17:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic restless camel",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:16:58Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic restless camel
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754907378
|
kapalbalap
| 2025-08-11T10:17:18Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:16:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jahyungu/AMD-OLMo-1B-SFT_LeetCodeDataset
|
jahyungu
| 2025-08-11T10:15:53Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"olmo",
"text-generation",
"generated_from_trainer",
"conversational",
"base_model:amd/AMD-OLMo-1B-SFT",
"base_model:finetune:amd/AMD-OLMo-1B-SFT",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T10:01:21Z |
---
library_name: transformers
license: apache-2.0
base_model: amd/AMD-OLMo-1B-SFT
tags:
- generated_from_trainer
model-index:
- name: AMD-OLMo-1B-SFT_LeetCodeDataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# AMD-OLMo-1B-SFT_LeetCodeDataset
This model is a fine-tuned version of [amd/AMD-OLMo-1B-SFT](https://huggingface.co/amd/AMD-OLMo-1B-SFT) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.0
|
JunHotate/blockassist-bc-mighty_foxy_bobcat_1754906596
|
JunHotate
| 2025-08-11T10:04:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"mighty foxy bobcat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:04:03Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- mighty foxy bobcat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
roeker/blockassist-bc-quick_wiry_owl_1754906556
|
roeker
| 2025-08-11T10:03:37Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:03:27Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
DevQuasar/ibm-granite.granite-guardian-3.3-8b-GGUF
|
DevQuasar
| 2025-08-11T10:02:05Z | 0 | 0 | null |
[
"gguf",
"text-generation",
"base_model:ibm-granite/granite-guardian-3.3-8b",
"base_model:quantized:ibm-granite/granite-guardian-3.3-8b",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-08-11T08:56:29Z |
---
base_model:
- ibm-granite/granite-guardian-3.3-8b
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [ibm-granite/granite-guardian-3.3-8b](https://huggingface.co/ibm-granite/granite-guardian-3.3-8b)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754906425
|
nilli2038
| 2025-08-11T10:01:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T10:00:51Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jiteshsureka/gemma-3-1b-ecomm-intent
|
jiteshsureka
| 2025-08-11T09:57:48Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"arxiv:1910.09700",
"base_model:unsloth/gemma-3-1b-it-unsloth-bnb-4bit",
"region:us"
] | null | 2025-08-11T09:50:39Z |
---
base_model: unsloth/gemma-3-1b-it-unsloth-bnb-4bit
library_name: peft
tags:
- base_model:adapter:unsloth/gemma-3-1b-it-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
pietro0hz/blockassist-bc-ferocious_toothy_tortoise_1754906129
|
pietro0hz
| 2025-08-11T09:57:10Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"ferocious toothy tortoise",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:56:44Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- ferocious toothy tortoise
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ggozzy/blockassist-bc-stubby_yapping_mandrill_1754905625
|
ggozzy
| 2025-08-11T09:48:32Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"stubby yapping mandrill",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:48:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- stubby yapping mandrill
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
smoorsmith/Dream-v0-Instruct-7B
|
smoorsmith
| 2025-08-11T09:47:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"Dream",
"feature-extraction",
"text-generation",
"conversational",
"custom_code",
"license:apache-2.0",
"region:us"
] |
text-generation
| 2025-08-11T09:40:29Z |
---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
---
# Dream-v0-Instruct-7B
This is the instruct model of Dream 7B, an open diffusion large language model with top-tier performance.
More details about the model and its usage can be found in the blog and GitHub below:
- **Blog:** https://hkunlp.github.io/blog/2025/dream/
- **Github:** https://github.com/HKUNLP/Dream
|
kapalbalap/blockassist-bc-peaceful_wary_owl_1754905408
|
kapalbalap
| 2025-08-11T09:44:36Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"peaceful wary owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:44:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- peaceful wary owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
nilli2038/blockassist-bc-gentle_gregarious_mouse_1754904934
|
nilli2038
| 2025-08-11T09:36:02Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"gentle gregarious mouse",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:35:59Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- gentle gregarious mouse
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_24_4_all_37_0.0001_2560_1
|
winnieyangwannan
| 2025-08-11T09:29:11Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:51:29Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_30_4_all_37_0.0001_2560_1
|
winnieyangwannan
| 2025-08-11T09:27:21Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:51:30Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Nullifier00/blockassist-bc-slimy_lanky_bison_1754902976
|
Nullifier00
| 2025-08-11T09:26:27Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"slimy lanky bison",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:26:04Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- slimy lanky bison
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_18_4_all_37_0.0001_1920_1
|
winnieyangwannan
| 2025-08-11T09:25:16Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T01:49:20Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_20_4_all_37_0.0001_1280_1
|
winnieyangwannan
| 2025-08-11T09:22:42Z | 6 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T10:08:25Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_Llama-3.1-8B-Instruct_mlp_pnas_layer_18_4_all_37_0.0001_640_1
|
winnieyangwannan
| 2025-08-11T09:20:40Z | 2 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T10:06:13Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_6400_all_37_epoch_1_layer_all
|
winnieyangwannan
| 2025-08-11T09:18:07Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:12:40Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
winnieyangwannan/entity_dpo_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_beta_0.05_5120_all_37_epoch_1_layer_all
|
winnieyangwannan
| 2025-08-11T09:17:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"dpo",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T09:12:41Z |
---
library_name: transformers
tags:
- trl
- dpo
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Bearrr310/sft_verl_0811unv
|
Bearrr310
| 2025-08-11T09:09:07Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"unsloth",
"trl",
"sft",
"dataset:sft_verl_0811unv",
"endpoints_compatible",
"region:us"
] | null | 2025-08-11T09:07:46Z |
---
base_model: unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit
datasets: sft_verl_0811unv
library_name: transformers
model_name: sft_verl_0811unv
tags:
- generated_from_trainer
- unsloth
- trl
- sft
licence: license
---
# Model Card for sft_verl_0811unv
This model is a fine-tuned version of [unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-1.5b-instruct-unsloth-bnb-4bit) on the [sft_verl_0811unv](https://huggingface.co/datasets/sft_verl_0811unv) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Bearrr310/sft_verl_0811unv", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.18.2
- Transformers: 4.52.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
michaelcpage345/blockassist-bc-miniature_deadly_anteater_1754901364
|
michaelcpage345
| 2025-08-11T09:05:43Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"miniature deadly anteater",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T09:05:38Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- miniature deadly anteater
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
AntResearchNLP/ViLaSR
|
AntResearchNLP
| 2025-08-11T09:04:16Z | 17,531 | 8 | null |
[
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"en",
"dataset:AntResearchNLP/ViLaSR-data",
"arxiv:2506.09965",
"base_model:Qwen/Qwen2.5-VL-7B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-VL-7B-Instruct",
"region:us"
] |
image-text-to-text
| 2025-06-01T15:56:01Z |
---
datasets:
- AntResearchNLP/ViLaSR-data
language:
- en
base_model:
- Qwen/Qwen2.5-VL-7B-Instruct
pipeline_tag: image-text-to-text
---
This repository contains the ViLaSR-7B model as presented in [Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing](https://arxiv.org/abs/2506.09965).
Please refer to the code at https://github.com/AntResearchNLP/ViLaSR for the full inference pipeline.
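A minimal loading sketch with 🤗 transformers is shown below. It assumes the checkpoint keeps the Qwen2.5-VL architecture of its base model; the full interleaved drawing-and-reasoning pipeline lives in the GitHub repository above.
```python
# Minimal loading sketch; assumes the checkpoint follows the Qwen2.5-VL
# layout of its base model (Qwen/Qwen2.5-VL-7B-Instruct). See the ViLaSR
# GitHub repo for the full drawing-and-reasoning inference pipeline.
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "AntResearchNLP/ViLaSR", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("AntResearchNLP/ViLaSR")
```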
```bibtex
@misc{wu2025reinforcingspatialreasoningvisionlanguage,
title={Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing},
author={Junfei Wu and Jian Guan and Kaituo Feng and Qiang Liu and Shu Wu and Liang Wang and Wei Wu and Tieniu Tan},
year={2025},
eprint={2506.09965},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2506.09965},
}
```
|
Kumo2023/maan2
|
Kumo2023
| 2025-08-11T08:57:17Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-08-11T07:51:30Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Maan2
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/Kumo2023/maan2/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the base FLUX.1-dev pipeline in half precision
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
# Attach this repository's LoRA weights
pipeline.load_lora_weights('Kumo2023/maan2', weight_name='lora.safetensors')
# Generate an image using the trigger word
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/Kumo2023/maan2/discussions) to add images that show off what you’ve made with this LoRA.
|
LeonardoBenitez/temp_sparse_lora_distillation_gas_pump_by_truck
|
LeonardoBenitez
| 2025-08-11T08:53:14Z | 0 | 0 | null |
[
"tensorboard",
"model-index",
"region:us"
] | null | 2025-05-25T20:13:28Z |
---
hyperparameters:
lora_r: 4
lora_alpha: 4.0
lora_dropout: 0.1
is_lora_negated: true
overwritting_concept: garbage_truck
model_name_or_path: stable-diffusion-v1-5/stable-diffusion-v1-5
tokenizer_name: null
dataset_forget_name: ../SD_lora_munba/assets/imagenette_splits/n03425413/train_forget
dataset_retain_name: ../SD_lora_munba/assets/imagenette_splits/n03425413/train_retain
dataset_forget_config_name: null
dataset_retain_config_name: null
image_column: image
caption_column: text
validation_prompt: Picture of a gas pump
num_validation_images: 1
validation_epochs: 1
resolution: 512
center_crop: false
random_flip: false
max_train_samples: null
dataloader_num_workers: 2
prediction_type: null
do_train: true
do_eval: false
per_device_train_batch_size: 1
gradient_accumulation_steps: 128
num_train_epochs: 30
learning_rate: 0.0002
lr_scheduler_type: cosine
output_dir: assets/models/sparse_lora_distillation_gas_pump_by_truck_3
logging_dir: logs
logging_steps: 20
save_strategy: epoch
save_total_limit: 2
seed: 42
should_log: true
local_rank: -1
device: cuda
n_gpu: 1
gradient_checkpointing: false
enable_xformers_memory_efficient_attention: false
mixed_precision: fp16
allow_tf32: false
use_8bit_adam: false
report_to: tensorboard
cache_dir: null
hub_token: null
hub_model_id: LeonardoBenitez/temp_sparse_lora_distillation_gas_pump_by_truck
revision: null
variant: null
compute_gradient_conflict: false
compute_runtimes: true
max_train_steps: 210
lr_warmup_steps: 0
adam_beta1: 0.9
adam_beta2: 0.999
adam_weight_decay: 0.01
adam_epsilon: 1.0e-08
max_grad_norm: 1.0
checkpointing_steps: 500
checkpoints_total_limit: null
resume_from_checkpoint: null
noise_offset: 0.0
model-index:
- name: LeonardoBenitez/temp_sparse_lora_distillation_gas_pump_by_truck
results:
- task:
type: text-to-image
dataset:
name: Forget set
type: inline-prompts
metrics:
- type: clip
value: 30.6926403427124
name: ForgetSet clip score of original model mean (~↑)
- type: clip
value: 2.766679328256628
name: ForgetSet clip score of original model std (~↓)
- type: clip
value: 25.28403902053833
name: ForgetSet clip score of learned model mean (~↑)
- type: clip
value: 3.7184093879987956
name: ForgetSet clip score of learned model std (~↓)
- type: clip
value: 30.318425407409666
name: ForgetSet clip score of unlearned model mean (↓)
- type: clip
value: 2.750741136251685
name: ForgetSet clip score of unlearned model std (~↓)
- type: clip
value: -5.034386386871338
name: ForgetSet clip score difference between learned and unlearned mean (↑)
- type: clip
value: 4.0031765235744246
name: ForgetSet clip score difference between learned and unlearned std (~↓)
- type: clip
value: 0.37421493530273436
name: ForgetSet clip score difference between original and unlearned mean (↑)
- type: clip
value: 3.254103831719402
name: ForgetSet clip score difference between original and unlearned std (~↓)
- type: clip
value: 29.053246574401854
name: RetainSet clip score of original model mean (~↑)
- type: clip
value: 3.850731326418032
name: RetainSet clip score of original model std (~↓)
- type: clip
value: 29.09435615539551
name: RetainSet clip score of learned model mean (~↓)
- type: clip
value: 4.266677393779429
name: RetainSet clip score of learned model std (~↓)
- type: clip
value: 29.388720207214355
name: RetainSet clip score of unlearned model mean (↑)
- type: clip
value: 3.6263604601791797
name: RetainSet clip score of unlearned model std (~↓)
- type: clip
value: -0.29436405181884767
name: RetainSet clip score difference between learned and unlearned mean (↓)
- type: clip
value: 2.875361910777803
name: RetainSet clip score difference between learned and unlearned std (~↓)
- type: clip
value: -0.3354736328125
name: RetainSet clip score difference between original and unlearned mean (↓)
- type: clip
value: 3.2409042528503273
name: RetainSet clip score difference between original and unlearned std (~↓)
- type: runtime
value: 5.32175062974294
name: Inference latency seconds mean (↓)
- type: runtime
value: 0.28290616306756466
name: Inference latency seconds std (~↓)
- task:
type: text-to-image
dataset:
name: ../SD_lora_munba/assets/imagenette_splits/n03425413/train_forget (forget)
and ../SD_lora_munba/assets/imagenette_splits/n03425413/train_retain (retain)
sets
type: forget-and-retain-together
metrics:
- type: runtime
value: 3.559974193572998
name: Runtime init seconds (~↓)
- type: runtime
value: 11.590957403182983
name: Runtime data loading seconds (~↓)
- type: runtime
value: 20448.4827272892
name: Runtime training seconds (↓)
- type: runtime
value: 2367.787192106247
name: Runtime eval seconds (~↓)
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA text2image fine-tuning - LeonardoBenitez/temp_sparse_lora_distillation_gas_pump_by_truck
These are LoRA adaptation weights for stable-diffusion-v1-5/stable-diffusion-v1-5.
The weights were fine-tuned to forget the ../PEM_composition_img_gen/assets/imagenette_splits/n03888257/train_forget dataset while retaining ../PEM_composition_img_gen/assets/imagenette_splits/n03888257/train_retain.
Some example images are shown below.







## Intended uses & limitations
#### How to use
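The original card left this snippet as a TODO; below is a minimal sketch, assuming the LoRA weights in this repository load through diffusers' standard LoRA interface, and reusing the validation prompt from the training configuration above.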
```python
# Minimal sketch (assumption: the repo's weights load via diffusers' LoRA interface)
import torch
from diffusers import StableDiffusionPipeline

# Base model named in the training configuration
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# LoRA adaptation weights from this repository
pipeline.load_lora_weights("LeonardoBenitez/temp_sparse_lora_distillation_gas_pump_by_truck")
# Validation prompt from the config; the unlearned concept should no longer render faithfully
image = pipeline("Picture of a gas pump").images[0]
image.save("gas_pump.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model]
|
kayacrypto/blockassist-bc-thriving_barky_wolf_1754901900
|
kayacrypto
| 2025-08-11T08:47:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thriving barky wolf",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:46:47Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thriving barky wolf
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
isomje/gemma3-4b-it-latin-ocr
|
isomje
| 2025-08-11T08:42:28Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-it",
"base_model:finetune:google/gemma-3-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-08-09T18:56:23Z |
---
base_model: google/gemma-3-4b-it
library_name: transformers
model_name: gemma3-4b-it-latin-ocr
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma3-4b-it-latin-ocr
This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="isomje/gemma3-4b-it-latin-ocr", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.0
- Pytorch: 2.8.0+cu129
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
FrAnKu34t23/Test
|
FrAnKu34t23
| 2025-08-11T08:40:31Z | 0 | 0 |
peft
|
[
"peft",
"tensorboard",
"safetensors",
"base_model:adapter:distilgpt2",
"lora",
"transformers",
"text-generation",
"arxiv:1910.09700",
"base_model:distilbert/distilgpt2",
"base_model:adapter:distilbert/distilgpt2",
"region:us"
] |
text-generation
| 2025-08-11T08:40:27Z |
---
base_model: distilgpt2
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:distilgpt2
- lora
- transformers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
Grogun/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-clawed_wily_manatee
|
Grogun
| 2025-08-11T08:39:46Z | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am clawed_wily_manatee",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-10T15:03:47Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am clawed_wily_manatee
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754901425
|
hitrax
| 2025-08-11T08:39:09Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:38:45Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
cpatonn/II-Search-4B-AWQ-8bit
|
cpatonn
| 2025-08-11T08:35:59Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Intelligent-Internet/II-Search-4B",
"base_model:quantized:Intelligent-Internet/II-Search-4B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"compressed-tensors",
"region:us"
] |
text-generation
| 2025-08-11T06:52:50Z |
---
base_model:
- Intelligent-Internet/II-Search-4B
pipeline_tag: text-generation
library_name: transformers
---

# II-Search-4B
<aside>
A 4B parameter language model specialized in information seeking, multi-hop reasoning, and web-integrated search, achieving state-of-the-art performance among models of similar size.
</aside>


## Model Description
II-Search-4B is a 4B parameter language model based on Qwen3-4B, fine-tuned specifically for information seeking tasks and web-integrated reasoning. It excels at complex multi-hop information retrieval, fact verification, and comprehensive report generation.
### Key Features
- Enhanced tool usage for web search and webpage visits
- Multi-hop reasoning capabilities with sophisticated planning
- Verified information retrieval with cross-checking
- Strong performance on factual QA benchmarks
- Comprehensive report generation for research queries
## Training Methodology
Our training process consisted of four key phases:
### Phase 1: Tool Call Ability Stimulation
We used a distillation approach from larger models (Qwen3-235B) to generate reasoning paths with function calling on multi-hop datasets. This established the base capabilities for tool use.
### Phase 2: Reasoning Improvement
We addressed initial limitations by:
- Creating synthetic problems requiring more reasoning turns, inspired by the Random Walk algorithm
- Improving reasoning thought patterns for more efficient and cleaner reasoning paths
### Phase 3: Rejection Sampling & Report Generation
We applied:
- Filtering to keep only high-quality reasoning traces (correct answers with proper reasoning)
- STORM-inspired techniques to enhance comprehensive report generation
### Phase 4: Reinforcement Learning
We trained the model using reinforcement learning:
- Used dataset: [dgslibisey/MuSiQue](https://huggingface.co/datasets/dgslibisey/MuSiQue)
- Incorporated our in-house search database (containing Wiki data, Fineweb data, and ArXiv data)
## Performance
| **Benchmark** | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| OpenAI/SimpleQA | 76.8 | 80.1 | 81.8 | 91.8 |
| Google/Frames | 30.7 | 24.8 | 34.0 | 67.5 |
| Seal_0 | 6.31 | 2.7 | 1.8 | 22.5 |
### Tool Usage Comparison
**Simple QA (SerpDev)**
| | **Qwen3-4B** | **Jan-4B** | **WebSailor-3B** | **II-Search-4B** |
| --- | --- | --- | --- | --- |
| # Search | 1.0 | 0.9 | 2.1 | 2.2 |
| # Visit | 0.1 | 1.9 | 6.4 | 3.5 |
| # Total Tools | 1.1 | 2.8 | 8.5 | 5.7 |
All benchmark traces from models can be found at: https://huggingface.co/datasets/II-Vietnam/Inspect-Search-Models-Benchmarking-Result
## Intended Use
II-Search-4B is designed for:
- Information seeking and factual question answering
- Research assistance and comprehensive report generation
- Fact verification and evidence-based reasoning
- Educational and research applications requiring factual accuracy
## Usage
To deploy and interact with the II-Search-4B model effectively, follow these options:
1. Serve the model using vLLM or SGLang
Use the following command to serve the model with vLLM (adjust parameters as needed for your hardware setup):
```bash
vllm serve Intelligent-Internet/II-Search-4B --served-model-name II-Search-4B --tensor-parallel-size 8 --enable-reasoning --reasoning-parser deepseek_r1 --rope-scaling '{"rope_type":"yarn","factor":1.5,"original_max_position_embeddings":98304}' --max-model-len 131072
```
This configuration enables distributed tensor parallelism across 8 GPUs, reasoning capabilities, custom RoPE scaling for extended context, and a maximum context length of 131,072 tokens.
2. Integrate web_search and web_visit tools
Equip the served model with web_search and web_visit tools to enable internet-aware functionality. Alternatively, use middleware such as MCP for tool integration; see this example repository: https://github.com/hoanganhpham1006/mcp-server-template.
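As an illustration, the sketch below declares tool schemas against the vLLM server's OpenAI-compatible API. The endpoint URL, tool descriptions, and parameter schemas are illustrative assumptions, not part of the official integration; your client is responsible for actually executing web_search and web_visit and feeding their results back to the model.
```python
# Minimal sketch: querying the vLLM-served model with web tools declared.
# Assumes the server from step 1 is running at localhost:8000; tool schemas
# below are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [
    {
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Search the web and return result snippets.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "web_visit",
            "description": "Fetch the text content of a URL.",
            "parameters": {
                "type": "object",
                "properties": {"url": {"type": "string"}},
                "required": ["url"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="II-Search-4B",
    messages=[{"role": "user", "content": "Who founded the university where the inventor of the WWW studied?"}],
    tools=tools,
)
print(response.choices[0].message)
```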
3. Host on macOS with MLX for local use
As an alternative for Apple Silicon users, host the quantized [II-Search-4B-MLX](https://huggingface.co/Intelligent-Internet/II-Search-4B-MLX) version on your Mac. Then, interact with it via user-friendly interfaces like LM Studio or Ollama Desktop.
## Recommended Generation Parameters
```python
generate_cfg = {
'top_k': 20,
'top_p': 0.95,
'temperature': 0.6,
'repetition_penalty': 1.1,
'max_tokens': 2048
}
```
- For queries that need a short, exact answer, append the following phrase (see the sketch below): "\n\nPlease reason step-by-step and put the final answer within \\\\boxed{}."
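A minimal sketch of applying this suffix, assuming you build prompts in Python:
```python
# Sketch: appending the recommended suffix for short, exact answers.
SHORT_ANSWER_SUFFIX = (
    "\n\nPlease reason step-by-step and put the final answer within \\boxed{}."
)

def build_query(question: str) -> str:
    """Return the question with the short-answer instruction appended."""
    return question + SHORT_ANSWER_SUFFIX

print(build_query("In what year was the Eiffel Tower completed?"))
```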
## Citation
```
@misc{II-Search-4B,
author = {Intelligent Internet},
title = {II-Search-4B: Information Seeking and Web-Integrated Reasoning LLM},
year = {2025},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https://huggingface.co/II-Vietnam/II-Search-4B}},
}
```
|
AXERA-TECH/satrn
|
AXERA-TECH
| 2025-08-11T08:35:13Z | 3 | 0 | null |
[
"onnx",
"Transformer",
"ONNX",
"ocr",
"mmocr",
"satrn",
"en",
"license:bsd-3-clause-clear",
"region:us"
] | null | 2025-06-11T03:08:44Z |
---
license: bsd-3-clause-clear
language:
- en
tags:
- Transformer
- ONNX
- ocr
- mmocr
- satrn
---
# satrn
[original repo](https://github.com/open-mmlab/mmocr/blob/main/configs/textrecog/satrn/README.md)
## Conversion tool links
If you are interested in model conversion, you can export ONNX or axmodel files with
[satrn.axera](https://github.com/AXERA-TECH/satrn.axera)
## Installation
```
conda create -n open-mmlab python=3.8 pytorch=1.10 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate open-mmlab
pip3 install openmim
git clone https://github.com/open-mmlab/mmocr.git
cd mmocr
mim install -e .
```
## Support Platform
- AX650
- [M4N-Dock(爱芯派Pro)](https://wiki.sipeed.com/hardware/zh/maixIV/m4ndock/m4ndock.html)
- [M.2 Accelerator card](https://axcl-docs.readthedocs.io/zh-cn/latest/doc_guide_hardware.html)
Speed measurements (under different NPU configurations) for the two parts of SATRN:
(1) backbone+encoder
(2) decoder
||backbone+encoder(ms)|decoder(ms)|
|--|--|--|
|NPU1|20.494|2.648|
|NPU2|9.785|1.504|
|NPU3|6.085|1.384|
## How to use
Download all files from this repository to the device.
```
.
├── axmodel
│ ├── backbone_encoder.axmodel
│ └── decoder.axmodel
├── demo_text_recog.jpg
├── onnx
│ ├── satrn_backbone_encoder.onnx
│ └── satrn_decoder_sim.onnx
├── README.md
├── run_axmodel.py
├── run_model.py
└── run_onnx.py
```
### Python environment requirements
#### 1. pyaxengine
https://github.com/AXERA-TECH/pyaxengine
```
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.1rc0/axengine-0.1.1-py3-none-any.whl
pip install axengine-0.1.1-py3-none-any.whl
```
#### 2. satrn
[satrn installation](https://github.com/open-mmlab/mmocr/tree/main?tab=readme-ov-file#installation)
#### Inference with the ONNX model
```
python run_onnx.py
```
input:

output:
```
pred_text: STAR
score: [0.9384028315544128, 0.9574984908103943, 0.9993689656257629, 0.9994958639144897]
```
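For reference, here is a minimal onnxruntime sketch of the two-stage pipeline that run_onnx.py implements; the input shape and preprocessing are illustrative assumptions, and run_onnx.py in this repository remains the authoritative version:
```python
# Sketch of the two-stage SATRN ONNX pipeline: backbone+encoder, then decoder.
# The input shape and preprocessing are assumptions; see run_onnx.py for the
# authoritative implementation, including the autoregressive decoding loop.
import numpy as np
import onnxruntime as ort

enc_sess = ort.InferenceSession("onnx/satrn_backbone_encoder.onnx")
dec_sess = ort.InferenceSession("onnx/satrn_decoder_sim.onnx")

# Placeholder image tensor; the real script resizes and normalizes the input.
image = np.random.rand(1, 3, 32, 100).astype(np.float32)

# Stage 1: run the backbone + encoder once per image.
enc_out = enc_sess.run(None, {enc_sess.get_inputs()[0].name: image})[0]

# Stage 2: the decoder consumes this encoding (autoregressively in the real
# pipeline). Inspect its expected inputs before wiring the loop yourself.
print([(i.name, i.shape) for i in dec_sess.get_inputs()])
```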
#### Inference with AX650 Host
Check the [reference](https://github.com/AXERA-TECH/satrn.axera) for more information.
|
koloni/blockassist-bc-deadly_graceful_stingray_1754899754
|
koloni
| 2025-08-11T08:34:39Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"deadly graceful stingray",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:34:35Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- deadly graceful stingray
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aXsalll/blockassist-bc-chattering_galloping_ape_1754900128
|
aXsalll
| 2025-08-11T08:31:57Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:31:50Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
tamewild/4b_v43_merged_e8
|
tamewild
| 2025-08-11T08:31:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T08:29:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
roeker/blockassist-bc-quick_wiry_owl_1754901009
|
roeker
| 2025-08-11T08:31:40Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"quick wiry owl",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:31:01Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- quick wiry owl
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
shiimi/wav2vec2
|
shiimi
| 2025-08-11T08:28:34Z | 0 | 0 | null |
[
"pytorch",
"wav2vec2",
"generated_from_trainer",
"dataset:common_voice_17_0",
"license:apache-2.0",
"region:us"
] | null | 2025-08-11T07:41:48Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_17_0
model-index:
- name: wav2vec2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_17_0 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
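For illustration, these settings map roughly onto the following transformers `TrainingArguments`; this is a reconstruction from the list above, not the original training script:
```python
# Sketch: TrainingArguments mirroring the hyperparameters listed above.
# This is a reconstruction for illustration, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed precision
)
```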
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.13.3
|
qavurdagli/blockassist-bc-bristly_sprightly_vulture_1754899382
|
qavurdagli
| 2025-08-11T08:27:50Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"bristly sprightly vulture",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:27:40Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- bristly sprightly vulture
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
surxjj/meta-llama-3.1-8b-lora
|
surxjj
| 2025-08-11T08:27:38Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"arxiv:1910.09700",
"region:us"
] |
text-generation
| 2025-08-11T08:25:57Z |
---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
- lora
- sft
- transformers
- trl
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
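In the absence of official instructions, here is a minimal sketch that loads this LoRA adapter on top of the base model named in the metadata; the repo ids are taken from this page and may need adjusting:
```python
# Sketch: loading the LoRA adapter on top of its 4-bit base model with PEFT.
# Repo ids are taken from this model's metadata; adjust paths as needed.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit"
adapter_id = "surxjj/meta-llama-3.1-8b-lora"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

inputs = tokenizer("Hello, world!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```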
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.0
|
fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF
|
fengpeisheng1
| 2025-08-11T08:14:57Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"model-merging",
"mergekit",
"lazymergekit",
"qwen3",
"4b",
"text-generation",
"causal-lm",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:Idavidrein/gpqa",
"base_model:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"base_model:merge:ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] |
text-generation
| 2025-08-11T08:14:43Z |
---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- merge
- model-merging
- mergekit
- lazymergekit
- qwen3
- 4b
- text-generation
- causal-lm
- llama-cpp
- gguf-my-repo
datasets:
- Idavidrein/gpqa
metrics:
- accuracy
base_model: ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0
base_model_relation: merge
model-index:
- name: qwen3-4b-merged---configuration-1
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (Massive Multitask Language Understanding)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: accuracy
value: 72.51
name: MMLU (5-shot)
verified: false
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (Graduate-level Physics Q&A)
type: Idavidrein/gpqa
config: gpqa_diamond
split: test
args:
num_few_shot: 0
metrics:
- type: accuracy
value: 45.45
name: GPQA Diamond (0-shot)
verified: false
---
# fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF
This model was converted to GGUF format from [`ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0`](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/ParrotRouter/Qwen3-4B-Instruct-2507-20250808-233922-0) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -c 2048
```
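Once the server is running, you can query it over its OpenAI-compatible HTTP API; a minimal Python sketch, assuming the default port 8080:
```python
# Sketch: querying the running llama-server over its OpenAI-compatible
# HTTP API (default port 8080 assumed).
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Briefly explain what a GGUF file is."}
        ],
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```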
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo fengpeisheng1/Qwen3-4B-Instruct-2507-20250808-233922-0-IQ4_NL-GGUF --hf-file qwen3-4b-instruct-2507-20250808-233922-0-iq4_nl-imat.gguf -c 2048
```
|
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1754899766
|
IvanJAjebu
| 2025-08-11T08:10:55Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"thorny slender capybara",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:10:23Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- thorny slender capybara
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/entity_sft_Llama-3.1-8B-Instruct_lora_8_lr_0.0001_10240_all_37_epoch_1_layer_all
|
winnieyangwannan
| 2025-08-11T08:08:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T08:03:18Z |
---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aXsalll/blockassist-bc-chattering_galloping_ape_1754898454
|
aXsalll
| 2025-08-11T08:06:23Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"chattering galloping ape",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T08:06:17Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- chattering galloping ape
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
jeongseokoh/Llama3.1-8B-LatentRAG-batch_20st-og
|
jeongseokoh
| 2025-08-11T08:02:18Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-11T07:55:16Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
airdroptails9/blockassist-bc-skilled_fluffy_salmon_1754888666
|
airdroptails9
| 2025-08-11T07:56:06Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"skilled fluffy salmon",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T07:55:42Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- skilled fluffy salmon
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
hitrax/blockassist-bc-timid_toothy_meerkat_1754898593
|
hitrax
| 2025-08-11T07:53:00Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"timid toothy meerkat",
"arxiv:2504.07091",
"region:us"
] | null | 2025-08-11T07:52:37Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- timid toothy meerkat
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|