SmolLM2 score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B-step-150000 (Version: main)

Model Details

  • Architecture: SmolLM2
  • Parameters: 1.7B

Training Configuration

optimizer:
  class_path: torch.optim.AdamW
  init_args:
    lr: 0.0005
    weight_decay: 0.01
precision: bf16-mixed
seed: 42
train:
  global_batch_size: 1024
  max_seq_length: 2048
  max_tokens: 600000000000
  micro_batch_size: 8
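
For reference, the optimizer block above maps directly onto torch.optim.AdamW, and the batch-size fields determine the gradient-accumulation factor. The sketch below is illustrative only (it is not the actual training loop used for this model), and the world_size value is an assumption about the hardware setup:

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B-step-150000"
)

# optimizer.init_args from the configuration above
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, weight_decay=0.01)

# global_batch_size = micro_batch_size * world_size * accumulation_steps, so with
# global_batch_size=1024 and micro_batch_size=8 on a single device (world_size=1,
# an assumption) each optimizer step accumulates gradients over 128 micro-batches.
world_size = 1
accumulation_steps = 1024 // (8 * world_size)
print(accumulation_steps)  # 128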

Model Loading and Revision System

This repository hosts multiple revisions of the model. To load a specific revision, pass the revision argument to from_pretrained, for example:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model and tokenizer from a specific revision (branch or tag) of the repository.
model = AutoModelForCausalLM.from_pretrained("locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B-step-150000", revision="final")
tokenizer = AutoTokenizer.from_pretrained("locuslab/score0_baseline20p_then_mix_rephrase123_with_mild_refusal45_metadata_5p-600B-step-150000", revision="final")

Replace "final" with the desired revision.
