TY - JOUR
T1 - Real-time Semantic Segmentation with Fast Attention
AU - Hu, Ping
AU - Perazzi, Federico
AU - Heilbron, Fabian Caba
AU - Wang, Oliver
AU - Lin, Zhe
AU - Saenko, Kate
AU - Sclaroff, Stan
N1 - KAUST Repository Item: Exported on 2021-06-29
PY - 2020
Y1 - 2020
N2 - Accurate semantic segmentation requires rich contextual cues (large receptive fields) and fine spatial details (high resolution), both of which incur high computational costs. In this paper, we propose a novel architecture that addresses both challenges and achieves state-of-the-art performance for semantic segmentation of high-resolution images and videos in real time. The proposed architecture relies on our fast attention, a simple modification of the popular self-attention mechanism that, by changing the order of operations, captures the same rich contextual information at a small fraction of the computational cost. Moreover, to efficiently process high-resolution input, we apply an additional spatial reduction to intermediate feature stages of the network with minimal loss in accuracy, thanks to the use of the fast attention module to fuse features. We validate our method with a series of experiments on multiple datasets, demonstrating better accuracy and speed than existing approaches for real-time semantic segmentation. On Cityscapes, our network achieves 74.4% mIoU at 72 FPS and 75.5% mIoU at 58 FPS on a single Titan X GPU, which is ~50% faster than the state of the art at the same accuracy.
UR - http://hdl.handle.net/10754/666065
UR - https://ieeexplore.ieee.org/document/9265219/
UR - http://www.scopus.com/inward/record.url?scp=85096820631&partnerID=8YFLogxK
DO - 10.1109/LRA.2020.3039744
M3 - Article
SN - 2377-3774
SP - 1
EP - 1
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
ER -