This repository contains the code for the research project:
"Enhancing Perceptual Quality of Images using Deep Residual U-Net and PatchGAN Discriminator"
We propose a U-Net-like generator with residual connections and a PatchGAN discriminator, optimised with perceptual loss (VGG16) to enhance both structural fidelity and perceptual quality of images.
- Residual U-Net Generator – captures fine details and global context.
- PatchGAN Discriminator – enforces local structural realism.
- Perceptual Loss (VGG16) – preserves colour, clarity, and high-level features (see the illustrative sketch after this list).
- Quantitative Metrics (final model, 55 epochs):
  - SSIM: 0.9270
  - FSIM: 0.9998
| Model Variant (Epochs) | SSIM | FSIM |
|---|---|---|
| 10 epochs | 0.8912 | 0.9965 |
| 25 epochs | 0.9134 | 0.9982 |
| 55 epochs (final) | 0.9270 | 0.9998 |
👉 Final model (55 epochs) produced the best perceptual and structural quality.
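For reference, SSIM between an enhanced output and its ground-truth image can be computed with scikit-image as in the minimal sketch below. The function name `ssim_score` and the [0, 1] float convention are choices made here, not the evaluation code used in the notebooks; FSIM usually comes from a separate third-party implementation.

```python
# Minimal SSIM check (assumes scikit-image); not the notebooks' exact script.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def ssim_score(enhanced: np.ndarray, reference: np.ndarray) -> float:
    """Both inputs are HxWx3 float arrays scaled to [0, 1]."""
    return ssim(enhanced, reference, channel_axis=-1, data_range=1.0)


# Example with dummy data: two nearly identical images score close to 1.0.
rng = np.random.default_rng(0)
img = rng.random((128, 128, 3), dtype=np.float32)
print(ssim_score(img, np.clip(img + 0.01, 0.0, 1.0)))
```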

- DIV2K – a dataset of high-quality 2K-resolution images
- Source: Kaggle – DIV2K Dataset
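The notebooks define their own preprocessing; purely as a hypothetical illustration of how (degraded input, clean target) pairs could be built from DIV2K high-resolution images, the sketch below assumes Pillow and NumPy, a local DIV2K_train_HR/ folder, and a bicubic down/up-sampling degradation. All of these are assumptions, not the pipeline actually used.

```python
# Hypothetical preprocessing sketch: folder name, image size, and the bicubic
# degradation are assumptions, not the notebooks' actual pipeline.
from pathlib import Path
import numpy as np
from PIL import Image


def load_pair(path: Path, size=(256, 256), scale=4):
    """Return a (degraded, target) pair as float32 arrays in [0, 1]."""
    hr = Image.open(path).convert("RGB").resize(size, Image.BICUBIC)
    # Simple degradation: bicubic down- then up-sampling removes fine detail.
    lr = hr.resize((size[0] // scale, size[1] // scale), Image.BICUBIC)
    lr = lr.resize(size, Image.BICUBIC)
    return (np.asarray(lr, dtype=np.float32) / 255.0,
            np.asarray(hr, dtype=np.float32) / 255.0)


# Hypothetical local path to the unpacked DIV2K high-resolution training split.
hr_paths = sorted(Path("DIV2K_train_HR").glob("*.png"))
pairs = [load_pair(p) for p in hr_paths[:8]]
```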
- Clone the repo:
```bash
git clone https://github.com/MAvRK7/Perceptually-Aware-Image-Enhancement-with-Deep-Residual-U-Net-and-PatchGANs.git
cd Perceptually-Aware-Image-Enhancement-with-Deep-Residual-U-Net-and-PatchGANs
```
- enhancing_images.ipynb contains the code for the model trained for 10 epochs.
- image_proj.ipynb contains the code for the models trained for 25 and 55 epochs; the reported results are from the 55-epoch model.
The code can be run by opening either notebook and executing its cells.

- Low-light photography – improves clarity and detail retention.
- Medical imaging – enhances diagnostic quality while preserving structure.
- Autonomous vehicles – improves perception in adverse conditions.


Contributions are welcome! Please open an issue or submit a pull request.
If you use this work, please cite:
```bibtex
@inproceedings{raghav2025perceptual,
  title     = {Enhancing Perceptual Quality of Images using Deep Residual U-Net and PatchGAN Discriminator},
  author    = {Raghav, Satvik and Narkedimilli, S. and Ayitapu, P. and Karthikeya, R. and Lalitha, S.},
  booktitle = {3rd Int. Conf. on New Trends in Computing Sciences (ICTCS)},
  year      = {2025}
}
```
For queries: satvikraghav007@gmail.com