IFQA: Interpretable Face Quality Assessment
Byungho Jo1, Donghyeon Cho2, In Kyu Park1, and Sungeun Hong1
1Inha University 2Chungnam National University
WACV 2023
Which of ‘Image A’ or ‘Image B’ is closer to the given reference image, or looks higher quality? General full-reference metrics (e.g., PSNR/SSIM), no-reference metrics (e.g., NIQE, BRISQUE, PI), and face image quality assessment (FIQA) methods are inconsistent with human judgment. LPIPS agrees with human judgment but cannot be applied to the blind face restoration scenario, where no reference image is available. Our IFQA is consistent with human judgment and additionally provides interpretability maps in which brighter regions indicate higher quality.
Abstract
Existing face restoration models have relied on general assessment metrics that do not consider the characteristics of facial regions. Recent works have therefore assessed their methods using human studies, which is not scalable and involves significant effort. This paper proposes a novel face-centric metric based on an adversarial framework where a generator simulates face restoration and a discriminator assesses image quality. Specifically, our per-pixel discriminator enables interpretable evaluation that cannot be provided by traditional metrics. Moreover, our metric emphasizes facial primary regions considering that even minor changes to the eyes, nose, and mouth significantly affect human cognition. Our face-oriented metric consistently surpasses existing general or facial image quality assessment metrics by impressive margins. We demonstrate the generalizability of the proposed strategy in various architectural designs and challenging scenarios. Interestingly, we find that our IFQA can lead to performance improvement as an objective function. The code and models areavailable at https://github.com/VCLLab/IFQA.
Our Framework
Given high-quality (HQ) images, we obtain low-quality (LQ) images via the blind face restoration (BFR) degradation formulation. The generator (G) mimics face restoration models, while the discriminator (D) evaluates image quality by classifying high-quality regions as ‘real’ and low-quality or restored regions as ‘fake’. Thanks to its U-Net architecture, the discriminator can evaluate the image pixel by pixel. FPRS allows the proposed metric to place more weight on primary facial regions, which have a significant impact on human visual perception; a sketch of this scoring scheme follows below.
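To make the pixel-wise scoring concrete, here is a minimal sketch assuming a U-Net discriminator `disc` that outputs a per-pixel realness map in [0, 1] and a binary `primary_mask` covering the eyes, nose, and mouth (e.g., from an off-the-shelf face parser). The function name, the weighting constant `alpha`, and the mask source are assumptions for illustration, not the released implementation.

```python
import torch

@torch.no_grad()
def ifqa_style_score(disc, image, primary_mask, alpha=2.0):
    """Hedged sketch of a per-pixel, face-weighted quality score.

    disc         : U-Net discriminator mapping a (1, 3, H, W) image to a
                   (1, 1, H, W) per-pixel realness map in [0, 1].
    primary_mask : (1, 1, H, W) binary mask of eye/nose/mouth pixels,
                   e.g. from an off-the-shelf face parser (assumption).
    alpha        : extra weight on primary facial regions (assumed value).
    """
    pixel_map = disc(image)                   # per-pixel interpretability map
    weights = 1.0 + alpha * primary_mask      # emphasize primary regions
    score = (pixel_map * weights).sum() / weights.sum()
    return score.item(), pixel_map            # scalar score and the map
```

The returned `pixel_map` is the kind of interpretability map shown in the teaser above: brighter pixels indicate regions the discriminator judges to be of higher quality.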
Quantitative Results
Comparison to No-Reference IQA Metrics [1-7]
Qualitative Results
FFHQ images [8]
In-the-wild face images [9]
BibTex
@InProceedings{Jo_2023_WACV,
author = {Jo, Byungho and Cho, Donghyeon and Park, In Kyu and Hong, Sungeun},
title = {IFQA: Interpretable Face Quality Assessment},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2023},
pages = {-}
}
References
[1] Anish Mittal, Rajiv Soundararajan, and Alan Conrad Bovik. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Processing Letters, vol. 20, no. 3, pages 209–212, March 2013.
[2] Yochai Blau and Tomer Michaeli. The Perception-Distortion Tradeoff. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6228–6237, June 2018.
[3] Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-Reference Image Quality Assessment in the Spatial Domain. IEEE Transactions on Image Processing, vol. 21, no. 12, pages 4695–4708, December 2012.
[4] Philipp Terhörst, Jan Niklas Kolf, Naser Damer, Florian Kirchbuchner, and Arjan Kuijper. SER-FIQ: Unsupervised Estimation of Face Image Quality Based on Stochastic Embedding Robustness. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5650–5659, June 2020.
[5] Javier Hernandez-Ortega, Javier Galbally, Julian Fierrez, Rudolf Haraksim, and Laurent Beslay. FaceQnet: Quality Assessment for Face Recognition Based on Deep Learning. In International Conference on Biometrics, pages 1–8, June 2019.
[6] Javier Hernandez-Ortega, Javier Galbally, Julian Fierrez, and Laurent Beslay. Biometric Quality: Review and Application to Face Recognition with FaceQnet. arXiv preprint, 2020.
[7] Fu-Zhao Ou, Xingyu Chen, Ruixin Zhang, Yuge Huang, Shaoxin Li, Jilin Li, Yong Li, Liujuan Cao, and Yuan-Gen Wang. SDD-FIQA: Unsupervised Face Image Quality Assessment with Similarity Distribution Distance. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7666–7675, June 2021.
[8] Tero Karras, Samuli Laine, and Timo Aila. A Style-Based Generator Architecture for Generative Adversarial Networks. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4396–4405, June 2019.
[9] Tao Yang, Peiran Ren, Xuansong Xie, and Lei Zhang. GAN Prior Embedded Network for Blind Face Restoration in the Wild. In Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 672–681, June 2021.