Image Inpainting for Missing Facial Data Recovery in Security Settings
Abstract
The widespread adoption of face masks during the COVID-19 pandemic has introduced significant challenges for facial recognition systems, particularly in security and authentication settings. This paper addresses the problem of incomplete facial data caused by mask occlusion, an issue that affects many image processing and recognition domains. Our approach centers on a high-resolution face image completion technique based on Generative Adversarial Networks (GANs). The Generator network follows a U-Net architecture and incorporates skip connections to preserve spatial information from the encoder path. We train the model with a total Generator loss that combines an adversarial term with a Mean Absolute Error (MAE) reconstruction loss, weighted by a regularization coefficient (λ) to mitigate overfitting. Training on the CelebA-HQ dataset, we generate synthetic masked faces by realistically simulating mask placements on original images. Our method achieves a Peak Signal-to-Noise Ratio (PSNR) of 22.25 and a Structural Similarity Index Measure (SSIM) of 0.874, surpassing conventional GANs, non-learning-based patch-matching methods, and certain diffusion-based techniques by a margin of 1.16%. The ability to reconstruct faces occluded by masks strengthens security protocols during the pandemic.
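The total Generator loss described above (adversarial term plus a λ-weighted MAE reconstruction term) can be sketched as follows. This is a minimal, dependency-free illustration, not the paper's implementation: the function names, the binary-cross-entropy form of the adversarial term, and the default λ = 100 (a common choice in pix2pix-style models) are assumptions.

```python
import math

def mae(pred, target):
    """Mean absolute error between two equal-length pixel sequences."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def generator_total_loss(disc_out_on_fake, pred, target, lam=100.0):
    """Hypothetical total Generator loss:
    adversarial term (generator wants the discriminator's output on
    generated images to approach 1) plus lam * MAE reconstruction loss."""
    eps = 1e-12  # guard against log(0)
    adv = -sum(math.log(d + eps) for d in disc_out_on_fake) / len(disc_out_on_fake)
    return adv + lam * mae(pred, target)
```

A perfect reconstruction that fully fools the discriminator drives both terms, and hence the total loss, toward zero; λ controls how strongly pixel fidelity is weighted against realism.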
Article Details
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.