Inpainting is an image interpolation problem with broad applications in image and vision analysis. This expository paper describes our recent efforts in developing universal inpainting models based on Bayesian and variational principles. We discuss in detail several variational inpainting models built upon geometric image models, the associated Euler-Lagrange PDEs with their geometric and dynamic interpretations, and effective computational approaches. We then extend this systematic variational framework to the inpainting of oscillatory textures, the interpolation of missing wavelet coefficients (as in the wireless transmission of JPEG2000 images), and light-adapted inpainting schemes motivated by Weber's law in visual perception. These efforts lead to the conclusion that, unlike many familiar image processors such as denoising, segmentation, and compression, the performance of a variational/Bayesian inpainting scheme depends much more crucially on whether the image prior model resolves the spatial coupling (or geometric correlation) of image features. As a highlight, we show that Besov image models appear to be far less effective for image inpainting in the wavelet domain, in sharp contrast to their significant roles in thresholding-based denoising and compression. Geometry is thus the single most important keyword throughout this paper.
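To make the variational setup concrete, the following is a minimal numerical sketch (not code from the paper) of the simplest geometric scheme in this family: total-variation (TV) inpainting by explicit gradient descent on a regularized TV energy, with the fidelity term restricted to the known pixels. Function name, parameter values, and the periodic boundary handling are illustrative assumptions.

```python
import numpy as np

def tv_inpaint(f, mask, lam=5.0, dt=0.02, eps=0.1, iters=500):
    """Regularized TV inpainting by explicit gradient descent (illustrative sketch).

    Minimizes  E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) sum mask*(u - f)^2,
    so the data-fidelity term acts only where mask == 1 (known pixels).
    Periodic boundary conditions via np.roll.
    """
    u = f.astype(float).copy()
    for _ in range(iters):
        # forward differences approximating the gradient
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)   # smoothed gradient norm
        px, py = ux / mag, uy / mag
        # divergence of (px, py) via backward differences (adjoint of forward diff)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Euler-Lagrange descent: curvature term + masked fidelity term
        u += dt * (div + lam * mask * (f - u))
    return u
```

The divergence term is the (regularized) curvature motion arising from the TV energy's Euler-Lagrange equation; it is what transports geometric information from the boundary of the missing region into its interior, illustrating the paper's point that the prior must encode geometric correlation.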