Abstract: To address the limitations of traditional image-based visual servoing (IBVS)—including its reliance on the image Jacobian model, insufficient adaptability under fixed gains, and frequent loss of feature points from the camera field of view—this paper proposes an improved fuzzy adaptive online sequential extreme learning machine visual servoing method (FA-OS-ELM-IBVS). The proposed approach employs an Online Sequential Extreme Learning Machine (OS-ELM) to estimate camera velocities directly from image errors, thereby avoiding explicit computation of the image Jacobian and its associated singularities. A Mamdani-type fuzzy gain regulator is constructed with the error norm, manipulability, and error convergence angle as inputs, enabling nonlinear and adaptive adjustment of the servo gains. Furthermore, a hierarchical rectangular region combined with a sigmoid-based smooth compensation strategy is introduced to achieve continuous and controllable field-of-view maintenance. A Lyapunov-based stability analysis rigorously establishes the stability of the proposed control system. Simulation results demonstrate that, compared with conventional IBVS, the proposed method reduces the convergence time by approximately 7.09%–16.7% and shortens the camera trajectory length by about 27.48%–57.94%, while substantially decreasing integral performance indices such as IAE and ITAE. Notably, stable convergence is preserved even under pronounced mismatch between the assumed and actual depths. Further comparisons with representative improved IBVS schemes indicate additional gains in both convergence speed and servoing performance. Experiments on a six-DoF CGXi G6 robot platform corroborate these findings, confirming improved convergence efficiency and validating the robustness and effectiveness of the proposed uncalibrated framework in real-world scenarios.