SPFusionNet: Sketch Segmentation Using Multi-Modal Data Fusion

F. Wang, S. Lin, H. Wu, H. Li, R. Wang, X. Luo, X. He
2019 IEEE International Conference on Multimedia and Expo (ICME), 2019. ieeexplore.ieee.org
The sketch segmentation problem remains largely unsolved because conventional methods are greatly challenged by the highly abstract appearance of freehand sketches and their numerous shape variations. In this work, we tackle these challenges by exploiting different modes of sketch data in a unified framework. Specifically, we propose a deep neural network, SPFusionNet, that captures the characteristics of a sketch by fusing its image and point-set modalities. The image-modal component, SketchNet, learns hierarchically abstract, robust features and utilizes multi-level representations to produce pixel-wise feature maps, while the point-set-modal component, SPointNet, captures local and global contexts of the sampled point set to produce point-wise feature maps. Our framework then aggregates these feature maps with a fusion network component to generate the sketch segmentation result. Extensive experimental evaluation and comparison with peer methods on our large SketchSeg dataset verify the effectiveness of the proposed framework.
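To make the two-branch idea concrete, below is a minimal, hypothetical PyTorch sketch of fusing pixel-wise features from a rasterized sketch image with point-wise features from the sampled stroke points, followed by a small head that predicts per-point part labels. All layer sizes, the bilinear sampling step, and the concatenation-based fusion rule are illustrative assumptions; they are not the authors' SketchNet, SPointNet, or fusion-network definitions.

```python
# Hypothetical two-branch fusion sketch; not the authors' SPFusionNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImageBranch(nn.Module):
    """Tiny fully convolutional encoder producing a pixel-wise feature map."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_dim, 3, padding=1), nn.ReLU(),
        )

    def forward(self, img):                  # img: (B, 1, H, W) rasterized sketch
        return self.net(img)                 # (B, C, H, W) pixel-wise features


class PointBranch(nn.Module):
    """PointNet-style MLP: per-point features plus a max-pooled global context."""
    def __init__(self, out_dim=64):
        super().__init__()
        self.local = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, out_dim))
        self.global_fc = nn.Linear(out_dim, out_dim)

    def forward(self, pts):                  # pts: (B, N, 2), (x, y) in [0, 1]
        local = self.local(pts)              # (B, N, C) local context per point
        glob = self.global_fc(local.max(dim=1).values)          # (B, C) global context
        return torch.cat([local, glob.unsqueeze(1).expand_as(local)], dim=-1)


class FusionSegmenter(nn.Module):
    """Samples image features at point locations and fuses them with point features."""
    def __init__(self, num_parts=4, dim=64):
        super().__init__()
        self.image_branch = ImageBranch(dim)
        self.point_branch = PointBranch(dim)
        self.head = nn.Sequential(nn.Linear(dim + 2 * dim, 128), nn.ReLU(),
                                  nn.Linear(128, num_parts))

    def forward(self, img, pts):
        pix_feat = self.image_branch(img)                        # (B, C, H, W)
        # Bilinearly sample the pixel-wise map at each stroke point location.
        grid = pts.unsqueeze(2) * 2 - 1                          # (B, N, 1, 2) in [-1, 1]
        sampled = F.grid_sample(pix_feat, grid, align_corners=True)  # (B, C, N, 1)
        sampled = sampled.squeeze(-1).transpose(1, 2)            # (B, N, C)
        pt_feat = self.point_branch(pts)                         # (B, N, 2C)
        fused = torch.cat([sampled, pt_feat], dim=-1)            # (B, N, 3C)
        return self.head(fused)                                  # per-point part logits


if __name__ == "__main__":
    model = FusionSegmenter(num_parts=4)
    img = torch.rand(2, 1, 128, 128)          # batch of rasterized sketches
    pts = torch.rand(2, 256, 2)               # 256 sampled stroke points per sketch
    print(model(img, pts).shape)              # torch.Size([2, 256, 4])
```

The design choice illustrated here is the alignment step: per-point predictions require the image branch's pixel-wise features to be gathered at the sampled point coordinates before they can be concatenated with the point branch's output, which is one simple way a fusion component can combine the two modalities.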