# AutoEncoder
A simple autoencoder trained on MNIST.
This model is part of the "Introduction to Generative AI" course.
For more details, visit the [GitHub repository](https://github.com/hussamalafandi/Generative_AI).
## Model Description
The AutoEncoder is a neural network that learns to compress and reconstruct its input. An encoder maps each input to a low-dimensional latent representation, and a decoder reconstructs the input from that latent vector.
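The exact layer definitions live in `model.py` in the repository; as a rough illustration only, a minimal encoder/decoder for 28×28 MNIST images with the latent dimension of 10 listed below (layer sizes here are assumptions, not the repository's actual values) might look like:

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Minimal MNIST autoencoder sketch (hypothetical layer sizes)."""

    def __init__(self, latent_dim: int = 10):
        super().__init__()
        # Encoder: flatten the 28x28 image and compress it to latent_dim values.
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: expand the latent vector back into a 28x28 image.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 28 * 28),
            nn.Sigmoid(),
            nn.Unflatten(1, (1, 28, 28)),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z)
```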
## Training Details
- **Dataset**: MNIST (handwritten digits)
- **Loss Function**: Mean Squared Error (MSE)
- **Optimizer**: Adam
- **Learning Rate**: 0.001
- **Epochs**: 40
- **Latent dim**: 10
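
A training loop consistent with these settings (a sketch under the hyperparameters above, not the exact script from the repository) could look like:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from model import AutoEncoder  # model definition from the repository

# Hyperparameters from the list above.
lr, epochs = 1e-3, 40

train_loader = DataLoader(
    datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=128,
    shuffle=True,
)

model = AutoEncoder()  # latent dim of 10 is assumed to be the default
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
criterion = nn.MSELoss()  # reconstruction loss

for epoch in range(epochs):
    for images, _ in train_loader:  # labels are unused for reconstruction
        reconstruction = model(images)
        loss = criterion(reconstruction, images)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```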
## Tracking
For detailed training logs and metrics, visit the [Weights & Biases run](https://wandb.ai/hussam-alafandi/mnist-autoencoder/runs/f81c7dgf?nw=nwuserhussamalafandi).
## Load Model
```python
import torch
from model import AutoEncoder  # model definition from the repository

# Instantiate the architecture and load the trained weights.
model = AutoEncoder()
model.load_state_dict(torch.load("model.pth"))
model.eval()  # switch to evaluation mode for inference
```
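
Once loaded, the model can reconstruct images; for example (a sketch, assuming the model takes a batch of 1×28×28 MNIST tensors):

```python
from torchvision import datasets, transforms

# Take one MNIST test image and run it through the autoencoder.
test_set = datasets.MNIST("data", train=False, download=True, transform=transforms.ToTensor())
image, _ = test_set[0]

with torch.no_grad():
    reconstruction = model(image.unsqueeze(0))  # add a batch dimension

print(image.shape, reconstruction.shape)
```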
## License
This project is licensed under the MIT License. See the LICENSE file for details.