Abstract: In human joint angle prediction, the information acquired from a single sensor is extremely limited and highly susceptible to environmental disturbances. Meanwhile, in existing multi-sensor joint angle prediction studies, the increased dimensionality of the input data exposes a weakness of traditional fusion methods, namely insufficient feature utilization, which degrades prediction accuracy. To accurately capture the motion state of lower-limb rehabilitation exoskeleton robots, we propose a multi-modal data fusion method for predicting the rehabilitation robot's joint angles. The algorithm adopts a multi-channel fusion high-resolution network designed specifically for human 3D pose feature extraction, together with convolutional neural networks to extract plantar pressure features. Long short-term memory networks then capture the temporal correlations of these features. Moreover, to accurately predict patients' joint angles, a fusion network based on the attention mechanism is proposed. The results show that, across three speed groups, the root mean square error of the proposed algorithm is 0.039, an improvement of more than 38% over the single-modal joint angle prediction method, and the coefficient of determination is 0.948, an improvement of more than 17% over the single-modal method.
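The attention-based fusion step mentioned above can be illustrated with a minimal, self-contained sketch. This is not the paper's actual network: it uses a simplified scalar-score attention in which each modality's feature vector (here, hypothetical pose and plantar-pressure branch outputs) is scored against a shared query vector, the scores are normalized with a softmax, and the fused representation is the weighted sum of the modality features. The function names, vector dimensions, and query values are all illustrative assumptions.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scalar scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(pose_feat, pressure_feat, query):
    """Fuse two modality feature vectors with scalar-score attention.

    Each modality receives one score (dot product of its features with
    a shared query vector); softmax turns the scores into weights; the
    fused vector is the weighted sum of the modality feature vectors.
    This is a didactic stand-in for the paper's learned fusion network.
    """
    modalities = [pose_feat, pressure_feat]
    scores = [sum(q * f for q, f in zip(query, feat)) for feat in modalities]
    weights = softmax(scores)
    fused = [sum(w * feat[i] for w, feat in zip(weights, modalities))
             for i in range(len(pose_feat))]
    return fused, weights

# Toy example: 3-dimensional features from hypothetical pose and
# plantar-pressure branches; the query vector is arbitrary.
pose = [0.8, 0.1, 0.3]
pressure = [0.2, 0.9, 0.4]
query = [1.0, 0.5, 0.0]
fused, weights = attention_fuse(pose, pressure, query)
```

In this toy setting the pose branch scores higher against the query, so it receives the larger attention weight; in the actual method the query (and the scoring function) would be learned jointly with the LSTM features rather than fixed by hand.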