---
base_model: AlignmentResearch/Llama-3.3-Tiny-Instruct
---

Random LoRA Adapter for Reference Model

This is a randomly initialized LoRA adapter for the AlignmentResearch/Llama-3.3-Tiny-Instruct model, intended for use as a reference model.

Details

  • Base model: AlignmentResearch/Llama-3.3-Tiny-Instruct
  • Adapter type: Reference
  • Seed: 0
  • LoRA rank: 16
  • LoRA alpha: 32
  • Target modules: all-linear
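
Creation sketch

The configuration listed above can be reproduced with peft's standard LoraConfig/get_peft_model workflow. This is a minimal sketch under that assumption; the exact initialization settings used for this repository are not documented here, and the output directory name is illustrative.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Seed the RNG so the randomly initialized LoRA weights are reproducible
torch.manual_seed(0)

base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

# LoRA configuration matching the details above
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules="all-linear",
)

# Wrap the base model with the adapter. Note that with peft's default
# initialization, lora_B starts at zero, so the adapter initially leaves
# the base model's outputs unchanged.
peft_model = get_peft_model(base_model, lora_config)

# Save only the adapter weights (illustrative output path)
peft_model.save_pretrained("llama-3.3-tiny-instruct-lora-reference-0")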

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load base model
base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
tokenizer = AutoTokenizer.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "AlignmentResearch/Llama-3.3-Tiny-Instruct-lora-reference-0")
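
# Optional sanity check: the adapter-wrapped model supports the usual
# generate() interface. The prompt string here is arbitrary.
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))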

This reference adapter was created for testing purposes and contains random weights.