Abstract: The learning rate (LR) is an important hyperparameter for the effective training of deep neural networks (DNNs). However, tuning the learning rate during DNN training remains difficult, and even choosing a good constant initial learning rate is not easy. Dynamic learning rate schedules adjust the learning rate in multiple steps across different stages of training to achieve high accuracy and fast convergence: a learning rate that is too small may cause the model to converge slowly or fall into a local optimum, while one that is too large may hinder convergence and cause oscillation or divergence. We therefore survey recent progress in learning rate research for deep learning algorithms, and we test and compare four families of learning rate schedules, namely piecewise (step) decay, smooth decay, cyclic learning rates, and learning rates with warm restarts, on several common data sets, evaluating convergence speed, robustness, mean and variance, and other metrics. Finally, we conclude the paper and give an outlook on the remaining problems and future research trends in this field.
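To make the four schedule families concrete, the sketch below instantiates one representative scheduler for each family using PyTorch's built-in torch.optim.lr_scheduler classes. This is a minimal illustration under assumed settings: the model, optimizer, and hyperparameter values are placeholders, not the experimental configuration used in the paper.

```python
# Illustrative sketch of the four learning rate schedule families:
# piecewise (step) decay, smooth decay, cyclic, and warm restarts.
# All hyperparameters below are assumed for demonstration only.
import torch
from torch.optim.lr_scheduler import (
    StepLR, CosineAnnealingLR, CyclicLR, CosineAnnealingWarmRestarts)

def make_scheduler(name, optimizer):
    if name == "step_decay":        # piecewise (step) decay: drop LR by gamma every step_size epochs
        return StepLR(optimizer, step_size=30, gamma=0.1)
    if name == "smooth_decay":      # smooth decay: here a cosine annealing curve
        return CosineAnnealingLR(optimizer, T_max=100)
    if name == "cyclic":            # cyclic: LR oscillates between base_lr and max_lr
        return CyclicLR(optimizer, base_lr=1e-3, max_lr=0.1, step_size_up=20)
    if name == "warm_restarts":     # warm restarts: cosine decay that periodically resets the LR
        return CosineAnnealingWarmRestarts(optimizer, T_0=50)
    raise ValueError(name)

for name in ["step_decay", "smooth_decay", "cyclic", "warm_restarts"]:
    model = torch.nn.Linear(10, 2)                                   # placeholder model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    scheduler = make_scheduler(name, optimizer)
    lrs = []
    for epoch in range(100):
        optimizer.step()   # a real training loop would compute a loss and backpropagate first
        scheduler.step()
        lrs.append(optimizer.param_groups[0]["lr"])
    print(name, [round(lr, 4) for lr in lrs[:5]], "...")
```

Printing the recorded learning rates over epochs shows the qualitative difference between the families: step decay changes in discrete jumps, smooth decay follows a continuous curve, the cyclic schedule rises and falls repeatedly, and warm restarts decay smoothly but periodically jump back to the initial learning rate.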