Face Inpainting based on High-Level Facial Attributes

Mahdi Jampour, Li Chen, Lap-Fai Yu, Kun Zhou, Stephen Lin, Horst Bischof

Research output: Contribution to journal › Article › peer-review


We introduce a novel data-driven approach for face inpainting that makes use of the observable region of an occluded face as well as its inferred high-level facial attributes, namely gender, ethnicity, and expression. Based on the intuition that the realism of a face inpainting result depends significantly on its overall consistency with respect to these high-level attributes, our approach selects a guidance face that matches the targeted attributes and uses it, together with the observable regions of the input face, to inpaint the missing areas. These two sources of information are balanced through an adaptive optimization. To further enhance visual quality, the inpainting is performed on intrinsic image layers rather than in RGB color space, which accounts for illumination differences between the target face and the guidance face. Our experiments demonstrate that this approach is effective for inpainting facial components, such as the mouth or the eyes, that are partially or completely occluded in the input face. A perceptual study shows that our approach generates more natural facial appearances by accounting for high-level facial attributes.
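The pipeline described above can be illustrated with a toy sketch: pick a guidance face from a database by matching high-level attributes, then fill the occluded region by mixing the guidance face with the observable context. This is a hypothetical simplification for intuition only, not the paper's algorithm: the attribute matcher, the scalar `alpha` (standing in for the paper's adaptive balance), and the mean-based context estimate are all assumptions, and the intrinsic-layer decomposition step is omitted.

```python
import numpy as np

# Illustrative sketch only -- not the paper's actual method or code.
# Step 1: pick a guidance face whose attributes (gender, ethnicity,
# expression) best match the target's inferred attributes.
# Step 2: fill the occluded region with a weighted mix of the guidance
# face and a crude context estimate from the observable pixels.

def select_guidance(target_attrs, database):
    """Return the database entry with the most matching attributes.
    (The paper infers the target's attributes from the observable
    region; here they are simply given.)"""
    def score(entry):
        return sum(a == b for a, b in zip(entry["attrs"], target_attrs))
    return max(database, key=score)

def blend_inpaint(target, guidance, mask, alpha=0.5):
    """Fill masked (occluded) pixels with a mix of the guidance face
    and the mean of the observable pixels; `alpha` stands in for the
    paper's adaptive balance between the two sources."""
    result = target.copy()
    context = target[~mask].mean()      # crude observable-context estimate
    result[mask] = alpha * guidance[mask] + (1.0 - alpha) * context
    return result

# Toy grayscale "faces": 4x4 arrays with a 2x2 occlusion in the middle.
database = [
    {"attrs": ("female", "asian", "smiling"), "face": np.full((4, 4), 0.8)},
    {"attrs": ("male", "european", "neutral"), "face": np.full((4, 4), 0.3)},
]
target = np.full((4, 4), 0.6)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # occluded region

guide = select_guidance(("female", "asian", "neutral"), database)
out = blend_inpaint(target, guide["face"], mask, alpha=0.5)
```

In the actual method, the blend weight is found by an adaptive optimization rather than fixed, and the blending operates on intrinsic image layers (reflectance and shading) so that illumination differences between target and guidance do not leak into the result.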
Original language: English
Pages (from-to): 29-41
Journal: Computer Vision and Image Understanding
Publication status: Published - 1 Aug 2017