TY - JOUR
T1 - Face Inpainting based on High-Level Facial Attributes
AU - Jampour, Mahdi
AU - Chen, Li
AU - Yu, Lap-Fai
AU - Zhou, Kun
AU - Lin, Stephen
AU - Bischof, Horst
PY - 2017/8/1
Y1 - 2017/8/1
N2 - We introduce a novel data-driven approach for face inpainting, which makes use of the observable region of an occluded face as well as its inferred high-level facial attributes, namely gender, ethnicity, and expression. Based on the intuition that the realism of a face inpainting result depends significantly on its overall consistency with respect to these high-level attributes, our approach selects a guidance face that matches the targeted attributes and utilizes it together with the observable input face regions to inpaint the missing areas. These two sources of information are balanced using an adaptive optimization, and the inpainting is performed on the intrinsic image layers instead of the RGB color space to handle illumination differences between the target face and the guidance face, further enhancing the resulting visual quality. Our experiments demonstrate that this approach is effective in inpainting facial components, such as the mouth or the eyes, that could be partially or completely occluded in the input face. A perceptual study shows that our approach generates more natural facial appearances by accounting for high-level facial attributes.
AB - We introduce a novel data-driven approach for face inpainting, which makes use of the observable region of an occluded face as well as its inferred high-level facial attributes, namely gender, ethnicity, and expression. Based on the intuition that the realism of a face inpainting result depends significantly on its overall consistency with respect to these high-level attributes, our approach selects a guidance face that matches the targeted attributes and utilizes it together with the observable input face regions to inpaint the missing areas. These two sources of information are balanced using an adaptive optimization, and the inpainting is performed on the intrinsic image layers instead of the RGB color space to handle illumination differences between the target face and the guidance face, further enhancing the resulting visual quality. Our experiments demonstrate that this approach is effective in inpainting facial components, such as the mouth or the eyes, that could be partially or completely occluded in the input face. A perceptual study shows that our approach generates more natural facial appearances by accounting for high-level facial attributes.
U2 - 10.1016/j.cviu.2017.05.008
DO - 10.1016/j.cviu.2017.05.008
M3 - Article
SN - 1077-3142
VL - 161
SP - 29
EP - 41
JO - Computer Vision and Image Understanding
JF - Computer Vision and Image Understanding
ER -