Abstract: Most RGB-D salient object detection methods adopt a symmetric structure in the fusion process, applying the same operations to RGB features and depth features. This ignores the differences between the RGB image and the depth image and is likely to produce false detections. To address this problem, this paper proposes a cross-modal fusion RGB-D salient object detection method based on an asymmetric structure. A global perception module (GPM) is designed to extract the global features of RGB images, and a deep denoising module (DDM) is designed to filter out the heavy noise in low-quality depth images. The proposed asymmetric fusion module then exploits the differences between the two modalities: depth features are used to locate salient objects and guide the fusion of RGB features, while RGB features supply the detailed information of salient objects, so that the two modalities complement each other with their respective strengths. Extensive experiments on four publicly available RGB-D salient object detection datasets verify that the proposed method outperforms state-of-the-art methods.
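To make the asymmetric fusion idea concrete, the following is a minimal sketch of one possible fusion step, written in PyTorch. It is not the paper's implementation: the class name `AsymmetricFusion`, the channel width, and the specific convolutions are all assumptions made for illustration. It only shows the general scheme described in the abstract, where the depth branch produces a coarse localization map that gates the RGB features, and the RGB features contribute the fine detail.

```python
import torch
import torch.nn as nn


class AsymmetricFusion(nn.Module):
    """Hypothetical asymmetric fusion step: depth features locate the salient
    region, and the resulting mask guides the fusion of RGB features."""

    def __init__(self, channels: int = 64):
        super().__init__()
        # Depth features -> single-channel localization map (assumed design).
        self.locate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
        )
        # Light refinement of the fused RGB features (assumed design).
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        # Depth locates the salient object; sigmoid turns it into a soft mask.
        mask = torch.sigmoid(self.locate(depth_feat))
        # RGB features are weighted by the depth-derived mask and kept as a
        # residual so their detail information is preserved.
        fused = rgb_feat * mask + rgb_feat
        return self.refine(fused)


if __name__ == "__main__":
    rgb = torch.randn(1, 64, 44, 44)
    depth = torch.randn(1, 64, 44, 44)
    out = AsymmetricFusion(64)(rgb, depth)
    print(out.shape)  # torch.Size([1, 64, 44, 44])
```

Note that the two inputs are treated differently, which is the defining property of the asymmetric structure: the depth branch only steers localization, while the RGB branch carries the appearance detail forward.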