---
base_model: AlignmentResearch/Llama-3.3-Tiny-Instruct
---
# Random LoRA Adapter for Reference Model
This is a randomly initialized LoRA adapter for the `AlignmentResearch/Llama-3.3-Tiny-Instruct` model, specifically designed for use as a reference model.
## Details
- **Base model**: AlignmentResearch/Llama-3.3-Tiny-Instruct
- **Adapter type**: Reference
- **Seed**: 0
- **LoRA rank**: 16
- **LoRA alpha**: 32
- **Target modules**: all-linear
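
The exact creation script is not included in this repository; the following is a minimal sketch of how an adapter with the configuration above could be produced with `peft`. Output paths and the `init_lora_weights` choice are assumptions (by default, PEFT zero-initializes `lora_B`, so fully random weights would require disabling the default initialization).

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

torch.manual_seed(0)  # seed listed above

base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")

lora_config = LoraConfig(
    r=16,                         # LoRA rank
    lora_alpha=32,                # LoRA alpha
    target_modules="all-linear",  # apply LoRA to all linear layers
    task_type="CAUSAL_LM",
    init_lora_weights=False,      # assumption: random init for both A and B matrices
)

model = get_peft_model(base_model, lora_config)
model.save_pretrained("Llama-3.3-Tiny-Instruct-lora-reference-0")  # illustrative path
```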
## Usage
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load base model
base_model = AutoModelForCausalLM.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
tokenizer = AutoTokenizer.from_pretrained("AlignmentResearch/Llama-3.3-Tiny-Instruct")
# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "AlignmentResearch/Llama-3.3-Tiny-Instruct-lora-reference-0")
```
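
Once loaded, the adapted model behaves like any `transformers` causal LM. A short, illustrative generation example (outputs are not expected to be meaningful, since the adapter weights are random):

```python
# Generate a short continuation with the adapter applied
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```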
This reference adapter was created for testing purposes and contains random weights.