---
license: cc-by-nc-sa-4.0
pipeline_tag: image-to-image
tags:
- pytorch
- super-resolution
---

## 2x-AnimeSharpV2 Set

**Scale:** 2
**Architecture:** RealPLKSR, MoSR GPS, ESRGAN
**Author:** Kim2091
**License:** CC BY-NC-SA 4.0
**Purpose:** Anime
**Subject:**
**Input Type:** Images
**Date:** 10-3-24
**Size:**
**I/O Channels:** 3(RGB)->3(RGB)

**Dataset:** HFA2k modified
**Dataset Size:** 2-3k
**OTF (on the fly augmentations):** No
**Pretrained Model:** 4x_realplksr_mssim_pretrain & 4x-MoSR_GPS_Pretrain
**Iterations:** 100k & 75k
**Batch Size:** 6-10
**GT Size:** 64-256

**Description:**

This is my first anime model in years. Hopefully you guys can find a good use case for it.

Included are 5 models across 3 architectures:

1. ESRGAN (highest quality, slowest)
2. RealPLKSR (higher quality, slower)
3. MoSR (lower quality, faster)

RealPLKSR and MoSR come in Sharp and Soft versions; ESRGAN is Soft only.

When to use each:

- __Sharp:__ For heavily degraded sources. Sharp models have issues with depth of field, but are best at removing artifacts.
- __Soft:__ For cleaner sources. Soft models preserve depth of field, but may not remove other artifacts as well.

__Notes:__

- MoSR doesn't currently work in chaiNNer
- To use MoSR:
  1. Use the ONNX version in tools like [VideoJaNai]()
  2. Update spandrel in the latest version of ComfyUI
- The ONNX version may produce slightly different results than the .pth version. If you have issues, try the .pth model instead (minimal loading sketches for both formats are at the end of this card).

__Comparisons:__

https://slow.pics/c/4UI20Qlu

https://github.com/user-attachments/assets/001c0eb2-7558-4294-b722-a7371f87a912
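
__Usage sketches:__

The .pth files can be loaded with [spandrel](https://github.com/chaiNNer-org/spandrel), the same model-loading library ComfyUI and chaiNNer use. Below is a minimal sketch, not an official loader: the filename is a placeholder for whichever model you downloaded, and loading the MoSR weights assumes a recent spandrel release that includes MoSR support.

```python
# Minimal sketch: upscale one image with a .pth model from this set via spandrel.
# The filename below is a placeholder; point it at the model you downloaded.
import numpy as np
import torch
from PIL import Image
from spandrel import ImageModelDescriptor, ModelLoader

model = ModelLoader().load_from_file("2x-AnimeSharpV2_RealPLKSR_Sharp.pth")
assert isinstance(model, ImageModelDescriptor)  # single-image super-resolution model
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

img = Image.open("input.png").convert("RGB")
# HWC uint8 -> BCHW float32 in [0, 1]
x = torch.from_numpy(np.array(img)).permute(2, 0, 1).unsqueeze(0).float().div(255.0).to(device)

with torch.no_grad():
    y = model(x).clamp(0, 1)

out = (y.squeeze(0).permute(1, 2, 0).cpu().numpy() * 255.0).round().astype(np.uint8)
Image.fromarray(out).save("output.png")
```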
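
If you want to run the ONNX export of MoSR outside of VideoJaNai, a plain onnxruntime call works as well. This is a sketch under assumptions: the filename is a placeholder, and the input is assumed to be BCHW float32 in [0, 1] (the usual layout for these exports); check the actual export if results look wrong.

```python
# Minimal sketch: run the ONNX MoSR model with onnxruntime.
# Filename and BCHW float32 [0, 1] input layout are assumptions.
import numpy as np
import onnxruntime as ort
from PIL import Image

sess = ort.InferenceSession("2x-AnimeSharpV2_MoSR.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]

img = Image.open("input.png").convert("RGB")
x = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0  # BCHW, [0, 1]

y = sess.run(None, {inp.name: x})[0]
out = np.clip(y[0].transpose(1, 2, 0), 0.0, 1.0)
Image.fromarray((out * 255.0).round().astype(np.uint8)).save("output.png")
```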