Enhanced SRResNet with Attention: Optimizing Image Super-Resolution

Abstract

This project explores the enhancement of image resolution using a Super-Resolution Residual Network (SRResNet) augmented with attention mechanisms. The aim is to reconstruct high-resolution images from low-resolution inputs more accurately than traditional upscaling methods. The model was implemented in PyTorch and evaluated on the Set5, Set14, and BSD100 benchmark datasets using peak signal-to-noise ratio (PSNR), mean squared error (MSE), and structural similarity index (SSIM). The enhanced SRResNet with attention achieved superior performance, improving textural detail and overall clarity while addressing limitations of conventional interpolation approaches.
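
As a rough illustration of how attention can be folded into an SRResNet-style architecture, the sketch below adds a squeeze-and-excitation style channel-attention module to a standard residual block in PyTorch. The class names, channel width, and reduction ratio are illustrative assumptions and do not reproduce the project's exact architecture.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative sketch)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global spatial average per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),  # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(self.pool(x))  # reweight feature channels

class AttentionResidualBlock(nn.Module):
    """SRResNet-style residual block with channel attention inserted."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attention = ChannelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.attention(self.body(x))  # residual connection

if __name__ == "__main__":
    block = AttentionResidualBlock(64)
    features = torch.randn(1, 64, 24, 24)  # stand-in feature map for a low-res patch
    print(block(features).shape)           # torch.Size([1, 64, 24, 24])
```

In this kind of design, the attention module rescales feature channels before the residual addition so that channels carrying fine texture can be emphasized during reconstruction.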

Key Contributions

The project integrates attention mechanisms into the SRResNet architecture, implements the resulting model in PyTorch, and benchmarks it on Set5, Set14, and BSD100 against traditional interpolation and the base SRResNet using PSNR, MSE, and SSIM. The attention-enhanced model improves textural detail and overall clarity over both baselines.

Conclusion

The enhanced SRResNet model outperforms both traditional interpolation and the base SRResNet on image upscaling tasks. Attention mechanisms help emphasize critical details during reconstruction. The model provides a strong foundation for future innovations in image processing, particularly for applications requiring high-fidelity image recovery.

Limitations

The model requires substantial computational resources and may not generalize well to extremely noisy or out-of-domain low-resolution inputs. In addition, although PSNR improved, the SSIM results indicate that preserving structural consistency remains a challenge.
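
For context on why PSNR and SSIM can disagree: PSNR is a pixel-wise fidelity score computed directly from MSE, while SSIM compares local luminance, contrast, and structure. The sketch below shows one common way to compute these metrics for a reconstructed image; it assumes NumPy and scikit-image as tooling and is not the project's actual evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity  # scikit-image, assumed available

def mse(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean squared error between two images with values in [0, 1]."""
    return float(np.mean((reference - reconstructed) ** 2))

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, derived directly from MSE."""
    err = mse(reference, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(max_value ** 2 / err)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hr = rng.random((64, 64))                                           # stand-in ground-truth patch
    sr = np.clip(hr + 0.02 * rng.standard_normal(hr.shape), 0.0, 1.0)   # stand-in reconstructed patch
    print(f"MSE:  {mse(hr, sr):.5f}")
    print(f"PSNR: {psnr(hr, sr):.2f} dB")
    print(f"SSIM: {structural_similarity(hr, sr, data_range=1.0):.4f}")
```

Because PSNR depends only on average pixel error, a model can raise PSNR while still losing local structure that SSIM penalizes, which is consistent with the limitation noted above.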

Future Work

Future improvements include integrating multi-scale attention modules, applying domain adaptation techniques, exploring lightweight models for edge devices, and extending to video super-resolution. A user-friendly interface and real-time web deployment are also planned.

🧠 Student Innovator: Zijun Liu is an aspiring computer vision researcher with a focus on deep learning and intelligent image restoration. Under the guidance of Dr. Happy Nkanta Monday, Zijun combined SRResNet with attention to push the frontier of image enhancement. He is passionate about AI-powered graphics and plans to explore vision transformers for future projects.
📄 Download Poster 🔗 View Code on GitHub