Abstract: The Autonomous Underwater Vehicle (AUV) needs to perform operations such as energy replenishment and data download through autonomous recovery during or after a mission. The efficiency and accuracy of recovery guidance determine the recovery efficiency of the AUV, which is crucial for its widespread application. To address short-range optical guidance and positioning in AUV recovery, this paper proposes deep learning-based monocular and binocular pose measurement algorithms. First, to cope with harsh underwater imaging conditions, a robust and reliable guidance light source extraction algorithm is implemented that combines dark channel prior dehazing with the YOLO v9 object detection network and adapts to different water qualities and light intensities. Meanwhile, to address the feature matching problem during recovery, an omnidirectional feature matching algorithm is designed that achieves 3D-2D feature matching independently of the AUV's speed. In addition, considering the typical multi-stage guidance characteristics of situated recovery, monocular and binocular guidance and positioning algorithms for the different stages are designed based on the PnP principle and singular value decomposition (SVD). Finally, the feasibility and effectiveness of the algorithms for accurate pose estimation are verified through multiple simulations and physical experiments.
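As a rough illustration of the two pose-estimation principles named above (PnP for the monocular stage, SVD-based rigid alignment for the binocular stage), the following is a minimal sketch, not the authors' implementation: the light-source layout, camera intrinsics, and all variable names are illustrative assumptions.

```python
# Minimal sketch of the two pose-estimation principles named in the abstract:
# (1) monocular pose from 3D-2D correspondences via PnP (cv2.solvePnP),
# (2) binocular pose from 3D-3D correspondences via SVD (Kabsch alignment).
# All numbers (light-source layout, intrinsics) are illustrative assumptions.
import numpy as np
import cv2

# Hypothetical 3D layout of guidance light sources on the docking station (metres).
object_points = np.array([[-0.5, -0.5, 0.0],
                          [ 0.5, -0.5, 0.0],
                          [ 0.5,  0.5, 0.0],
                          [-0.5,  0.5, 0.0],
                          [ 0.0,  0.0, 0.2]], dtype=np.float64)

# Assumed pinhole intrinsics of the AUV camera (distortion ignored for simplicity).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])

def monocular_pose(image_points):
    """Stage 1: PnP from matched 2D light-source centroids (Nx2) to the known 3D layout."""
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    return ok, R, tvec

def binocular_pose(points_cam):
    """Stage 2: rigid transform from stereo-triangulated 3D points (Nx3) via SVD (Kabsch)."""
    src, dst = object_points, points_cam
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    t = dst.mean(0) - R @ src.mean(0)
    return R, t
```

In this sketch, the monocular stage would consume the 2D light-source centroids produced by the detection front end, while the binocular stage would consume the same points after stereo triangulation; how the paper actually extracts and matches those features is described in the body of the work.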