Smol-Hub-tldr

This model is a fine-tuned version of HuggingFaceTB/SmolLM2-360M, focused on generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub.

The model was trained using supervised fine-tuning (SFT) with TRL.
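
For context, a minimal TRL SFT setup looks roughly like the sketch below; the dataset ID and hyperparameters are placeholders, since the exact training configuration is not reproduced in this card.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Placeholder dataset ID: the actual training data (Llama 3.3 70B-generated
# summaries, see "Training Data" below) is not named in this card
dataset = load_dataset("your-username/hub-tldr-summaries", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-360M",  # base model being fine-tuned
    train_dataset=dataset,
    args=SFTConfig(output_dir="Smol-Hub-tldr"),
)
trainer.train()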

A meta example of a summary generated for this card:

This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub.

Intended Use

The model is designed to generate brief, informative summaries of:

  • Model cards: Focusing on key capabilities and characteristics
  • Dataset cards: Capturing essential dataset characteristics and purposes

Training Data

The model was trained on:

  • Model card summaries generated by Llama 3.3 70B
  • Dataset card summaries generated by Llama 3.3 70B

Usage

Using the chat template during inference is recommended. Additionally, you should prepend either <MODEL_CARD> or <DATASET_CARD> to the start of the card you want to summarize. The training data used the body of the model or dataset card, i.e., the part after the YAML front matter, so you will likely get better results by passing only this part of the card.

So far, I have found that a low temperature (0.4) generates better results.
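
If you are working from a raw README string rather than the ModelCard helper shown below, a minimal sketch for stripping the YAML front matter (the regex here is an illustration, not part of this model's tooling):

import re

def card_body(raw_readme: str) -> str:
    # Strip a leading YAML front-matter block delimited by "---" lines
    return re.sub(r"\A---\n.*?\n---\n", "", raw_readme, flags=re.DOTALL)

raw = "---\nlicense: mit\n---\n# My model\nDoes things."
print(card_body(raw))  # -> "# My model\nDoes things."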

Example:

from transformers import AutoModelForCausalLM, AutoTokenizer
from huggingface_hub import ModelCard

# `card.text` is the card body after the YAML front matter
card = ModelCard.load("davanstrien/Smol-Hub-tldr")

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("davanstrien/Smol-Hub-tldr")
model = AutoModelForCausalLM.from_pretrained("davanstrien/Smol-Hub-tldr")

# Format input according to the chat template
messages = [{"role": "user", "content": f"<MODEL_CARD>{card.text}"}]
# Encode with the chat template
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Generate with stop tokens
outputs = model.generate(
    inputs,
    max_new_tokens=60,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=tokenizer.eos_token_id,
    temperature=0.4,
    do_sample=True,
)

input_length = inputs.shape[1]
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=False)

# Extract just the summary part
summary = response.split("<CARD_SUMMARY>")[-1].split("</CARD_SUMMARY>")[0]
print(summary)
>>> "The Smol-Hub-tldr model is a fine-tuned version of SmolLM2-360M designed to generate concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."

The model should currently close its summary with a </CARD_SUMMARY> token (still cooking on this...), so you can also use this token as a stopping criterion when using pipeline inference.

from transformers import pipeline, StoppingCriteria, StoppingCriteriaList
import torch


class StopOnTokens(StoppingCriteria):
    def __init__(self, tokenizer, stop_token_ids):
        self.stop_token_ids = stop_token_ids
        self.tokenizer = tokenizer

    def __call__(
        self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
    ) -> bool:
        for stop_id in self.stop_token_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False


# Initialize pipeline
pipe = pipeline("text-generation", "davanstrien/Smol-Hub-tldr")
tokenizer = pipe.tokenizer

# Get the token IDs for stopping
stop_token_ids = [
    tokenizer.encode("</CARD_SUMMARY>", add_special_tokens=False)[-1],
    tokenizer.eos_token_id,
]

# Create stopping criteria
stopping_criteria = StoppingCriteriaList([StopOnTokens(tokenizer, stop_token_ids)])

# Generate with stopping criteria (reusing `messages` from the example above)
response = pipe(
    messages,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    stopping_criteria=stopping_criteria,
    return_full_text=False,
)

# Clean up the response
summary = response[0]["generated_text"]
print(summary)
>>> "This model is a fine-tuned version of SmolLM2-360M for generating concise, one-sentence summaries of model and dataset cards from the Hugging Face Hub."

Framework Versions

  • TRL 0.14.0
  • Transformers 4.48.3
  • PyTorch 2.6.0
  • Datasets 3.2.0
  • Tokenizers 0.21.0