Abstract
We address the problem of texture segmentation by grouping dense pixel-wise descriptors. We introduce learned Shape-Tailored Descriptors that aggregate image statistics only within regions of interest, so as to avoid mixing statistics of different textures, and that are invariant to complex nuisances (e.g., illumination, perspective, and deformations). This is accomplished by training a neural network to discriminate base shape-tailored descriptors of oriented gradients at various scales. These base descriptors are defined through partial differential equations so that statistics at multiple scales can be computed within arbitrarily shaped regions. We formulate and optimize a joint problem in the segmentation and the descriptors that discriminates the base descriptors under the learned metric, which is equivalent to grouping the learned descriptors. Experiments on benchmark datasets show that descriptors learned on a small dataset of segmented images generalize well to unseen textures in other datasets, demonstrating the generic nature of these descriptors. We also show state-of-the-art results on texture segmentation benchmarks.
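As a minimal illustrative sketch (not the authors' code), the following Python snippet shows one way a base shape-tailored channel could be computed: an oriented-gradient response is smoothed by approximately solving a screened-Poisson-type PDE restricted to a region mask with zero-flux (Neumann) boundary conditions, so statistics do not leak across the region boundary, and the scale parameter controls the aggregation scale. The function name, parameters (`alpha`, `n_iter`), and the Jacobi solver are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def shape_tailored_channel(channel, mask, alpha=10.0, n_iter=500):
    """Sketch: smooth `channel` (e.g., an oriented-gradient map) only inside
    the binary region `mask` by approximately solving
        u - alpha * Laplacian(u) = channel   inside the region,
    with zero-flux (Neumann) conditions on the region boundary.
    Larger `alpha` corresponds to a larger aggregation scale."""
    mask = mask.astype(float)
    u = channel * mask
    for _ in range(n_iter):
        # For neighbors outside the region, reuse the center value so the
        # finite-difference Laplacian sees zero flux across the boundary.
        up    = np.where(np.roll(mask, -1, axis=0) > 0, np.roll(u, -1, axis=0), u)
        down  = np.where(np.roll(mask,  1, axis=0) > 0, np.roll(u,  1, axis=0), u)
        left  = np.where(np.roll(mask, -1, axis=1) > 0, np.roll(u, -1, axis=1), u)
        right = np.where(np.roll(mask,  1, axis=1) > 0, np.roll(u,  1, axis=1), u)
        # Jacobi update for (I - alpha * Laplacian) u = channel on the masked grid.
        u_new = (channel + alpha * (up + down + left + right)) / (1.0 + 4.0 * alpha)
        u = np.where(mask > 0, u_new, 0.0)
    return u

# Toy usage: one stand-in channel aggregated at two scales inside a disk region.
if __name__ == "__main__":
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    mask = ((yy - 32) ** 2 + (xx - 32) ** 2 < 25 ** 2)
    channel = np.random.rand(h, w)  # stand-in for an oriented-gradient response
    fine   = shape_tailored_channel(channel, mask, alpha=2.0)
    coarse = shape_tailored_channel(channel, mask, alpha=50.0)
    print(fine.shape, coarse.shape)
```

In this sketch, stacking such channels over several orientations and scales would give the base descriptors, which a learned network could then map to a discriminative space before grouping.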
| Original language | English (US) |
|---|---|
| Title of host publication | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
| Publisher | Institute of Electrical and Electronics Engineers (IEEE) |
| Pages | 666-674 |
| Number of pages | 9 |
| ISBN (Print) | 9781538664209 |
| DOIs | |
| State | Published - Dec 18 2018 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01
Acknowledged KAUST grant number(s): OCRF-2014-CRG3-62140401
Acknowledgements: This research was funded by KAUST OCRF-2014-CRG3-62140401 and VCC.