# Random LoRA Adapter for Reference Model
This is a randomly initialized LoRA adapter for the `AlignmentResearch/Llama-3.3-Tiny-Instruct` model, intended for use as a reference model.
## Details
- Base model: `AlignmentResearch/Llama-3.3-Tiny-Instruct`
- Adapter type: Reference
- Seed: 0
- LoRA rank: 16
- LoRA alpha: 32
- Target modules: all-linear
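The hyperparameters above map directly onto peft's `LoraConfig`. As a minimal sketch (the exact creation script is not part of this card), an adapter like this one could be built with `init_lora_weights=False`, which is peft's option for drawing random weights instead of the usual zero-initialized update:

```python
from peft import LoraConfig

# Hyperparameters taken from the Details section above
config = LoraConfig(
    r=16,                        # LoRA rank
    lora_alpha=32,               # LoRA alpha
    target_modules="all-linear",
    init_lora_weights=False,     # random initialization (peft's testing-oriented option)
    task_type="CAUSAL_LM",
)
```

Passing this config to `peft.get_peft_model(base_model, config)` would wrap every linear layer of the base model with a randomly initialized low-rank update.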
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
tokenizer = AutoTokenizer.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(base_model, "AlignmentResearch/Llama-3.3-Tiny-Instruct-lora-reference-0")
```
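At inference time a LoRA adapter adds a scaled low-rank update to each targeted weight. A self-contained sketch of that arithmetic, using this card's rank and alpha (the matrix dimensions are illustrative, not taken from the model; standard LoRA zero-initializes `B` so the adapter starts as a no-op, whereas a random `B`, as in this adapter, perturbs the base model's outputs immediately):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed 0, matching the Details section

r, alpha = 16, 32                        # rank and alpha from this card
d_out, d_in = 64, 64                     # illustrative layer dimensions
W = rng.normal(size=(d_out, d_in))       # frozen base weight
A = rng.normal(size=(r, d_in))           # LoRA down-projection
B = rng.normal(size=(d_out, r))          # LoRA up-projection (random here, zero in standard init)

# Effective weight seen at inference: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * B @ A

# Applying W_eff directly matches applying the base weight plus the low-rank path
x = rng.normal(size=(d_in,))
y = W_eff @ x
y_decomposed = W @ x + (alpha / r) * (B @ (A @ x))
assert np.allclose(y, y_decomposed)
```

Because `B` is random rather than zero, `W_eff` differs from `W`, which is exactly the behavior a reference/testing adapter like this one exercises.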
This reference adapter was created for testing purposes and contains random weights.