Model Card for nidra-v1
A dream-interpreter model in a Jungian/mystic style.
Model Details
Model Description
nidra-v1 has been fine-tuned on a dataset of 1600 input-output pairs to interpret dreams, providing an overall summary together with mystical/psychological observations.
- Developed by: M1K3wn
- Model type: Seq2Seq
- Language(s) (NLP): English
- License: [More Information Needed]
- Finetuned from model: google/flan-t5-base
Model Sources
- Repository: TBC
Uses
Intended to generate dream interpretations that include a mystical/psychological reading.
Direct Use
Generates an 80-120 word analysis of a dream when prompted with: "Interpret this dream: < input dream >"
Out-of-Scope Use
Unlikely to perform well on any task other than the dream-interpretation format it was trained on.
Bias, Risks, and Limitations
This model's outputs are unpredictable, and its advice should not be followed by anyone, anywhere, ever.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
How to Get Started with the Model
"Interpret this dream: < input dream >"
Training Details
Training Data
All training data was synthetically generated: 1600 input-output pairs. Example:
"input": "I was chasing a shadow through an empty city, but every time I got close, it slipped away into the darkness.",
"target": "Chasing a shadow symbolises the pursuit of elusive aspects of yourself or unresolved issues from the past. The empty city suggests feelings of isolation or a lack of direction in your waking life. The shadow slipping into darkness reflects the difficulty of confronting fears or hidden truths. Psychologically, this dream may indicate that you are on a journey of self-discovery, attempting to integrate repressed parts of your psyche. Mystically, the shadow could represent an invitation to explore your inner depths and embrace the unknown. This dream encourages you to persist in seeking clarity and to face whatever fears or doubts arise, as they hold the key to personal growth."
Training Procedure
Largely trial and error rather than a systematic hyperparameter search.
Training Hyperparameters
train_ratio: float = 0.9
batch_size: int = 2
gradient_accumulation_steps: int = 6
num_epochs: int = 6
learning_rate: float = 5e-4
weight_decay: float = 0.02
warmup_steps = 40
lr_scheduler_type: str = "cosine"
max_grad_norm: float = 0.5
weight_decay: float = 0.012
epochs: int = 5
gradient_accumulation: int = 8
LoRA:
- lora_r: 8
- lora_alpha: 16
- dropout: 0.15
- attention_layers: "q", "k", "v", "o"
- Training regime:
  - fp16: False (disable mixed precision)
  - bf16: False (disable bfloat16)
  - no_cuda: True (ensure CUDA isn't used)
  - use_mps_device: True (utilise MPS with PyTorch)
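The sketch below shows one way the values above could be expressed with peft and transformers; it is illustrative only. Where the card lists two values for the same setting (weight_decay, epochs, gradient accumulation), one is picked arbitrarily here, and the output_dir is an assumed name.

```python
# Sketch mapping the listed hyperparameters onto peft / transformers configs.
from peft import LoraConfig, TaskType
from transformers import Seq2SeqTrainingArguments

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.15,
    target_modules=["q", "k", "v", "o"],  # T5 attention projections
)

training_args = Seq2SeqTrainingArguments(
    output_dir="nidra-v1",               # assumed output path
    per_device_train_batch_size=2,
    gradient_accumulation_steps=6,       # card also lists 8
    num_train_epochs=6,                  # card also lists 5
    learning_rate=5e-4,
    weight_decay=0.02,                   # card also lists 0.012
    warmup_steps=40,
    lr_scheduler_type="cosine",
    max_grad_norm=0.5,
    fp16=False,
    bf16=False,
    no_cuda=True,            # per the card; deprecated in newer transformers
    use_mps_device=True,     # per the card; newer transformers pick MPS automatically
)
```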
Framework versions
- PEFT 0.14.0