Motivated by the need to deploy deep neural networks on embedded devices, and following a modular, layer-by-layer processing strategy, a lightweight method based on deep sparse low-rank decomposition is designed to compress the mainstream detection and recognition network Faster R-CNN. Taking the characteristics of the Faster R-CNN architecture into account, the backbone of the feature extraction network is first lightened using depthwise separable convolution and sparse low-rank theory. Second, sparse low-rank pruning further compresses the backbone through a cycle of layer-by-layer channel pruning, layer-by-layer retraining, and layer-by-layer fine-tuning. The region proposal network (RPN) is then compressed using Tensor-Train decomposition while keeping the performance loss as small as possible. Sparse low-rank decomposition and channel pruning are also applied to the recognition and classification head, yielding a higher compression ratio with lower memory and compute requirements. Finally, knowledge distillation of the RPN input features, guided by region-of-interest location awareness, recovers detection and recognition performance. Numerical experiments show that the proposed method compresses the model by a factor of about 100 while reducing the detection and recognition rate by only 5%.
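To make the low-rank decomposition step concrete, the following is a minimal sketch, assuming a PyTorch implementation, of factorizing one dense layer of the classification head via truncated SVD into two smaller layers; the function name `low_rank_factorize`, the example layer sizes, and the chosen rank are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def low_rank_factorize(linear: nn.Linear, rank: int) -> nn.Sequential:
    """Approximate a dense layer W (out x in) by two smaller layers
    of shapes (rank x in) and (out x rank) using truncated SVD."""
    W = linear.weight.data                     # (out_features, in_features)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]               # (out, rank), singular values absorbed
    V_r = Vh[:rank, :]                         # (rank, in)

    first = nn.Linear(linear.in_features, rank, bias=False)
    second = nn.Linear(rank, linear.out_features, bias=linear.bias is not None)
    first.weight.data.copy_(V_r)
    second.weight.data.copy_(U_r)
    if linear.bias is not None:
        second.bias.data.copy_(linear.bias.data)
    return nn.Sequential(first, second)

# Hypothetical example: compress a 4096x4096 head layer to rank 256.
layer = nn.Linear(4096, 4096)
compressed = low_rank_factorize(layer, rank=256)
orig_params = sum(p.numel() for p in layer.parameters())       # ~16.8M
comp_params = sum(p.numel() for p in compressed.parameters())  # ~2.1M
print(orig_params, comp_params)
```

In practice the retained rank would be chosen per layer, and the factorized network retrained or fine-tuned layer by layer, in line with the pruning-retraining-tuning cycle described above.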