Abstract: Recurrent neural networks (RNNs) are the main implementation paradigm for deep neural network sequence models and have developed rapidly and spread widely over the last two decades. RNNs now underpin machine translation, question answering, and video sequence analysis, and they are also the mainstream modeling approach for automatic handwriting synthesis, speech processing, and image generation. In this paper, recurrent neural networks are classified in detail according to network structure into three broad categories. The first comprises variants of recurrent neural networks: structural variants of the basic RNN architecture, obtained by modifying the internal structure of the RNN. The second is combined RNNs, which couple other classical network models or structures with RNNs of the first kind to achieve better modeling performance; this is a very effective approach. The third is hybrid RNNs, which both combine different network models and modify the internal structure of the RNN. To deepen the understanding of RNNs, this paper also introduces the structure of recursive neural networks, which are often confused with RNNs, and the differences and connections between recursive neural networks and RNNs. After a detailed description of the application background, network structure, and model variants of the above RNN-based models, the characteristics of each model are summarized and compared. Finally, a summary and outlook for RNNs are given.
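As background for the taxonomy above, the "basic RNN architecture" that the structural variants modify can be sketched as the vanilla (Elman) recurrence h_t = tanh(W_xh x_t + W_hh h_{t-1} + b_h). The sketch below is illustrative only; the function names and dimensions are assumptions, not taken from the paper.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One step of a vanilla (Elman) RNN: h_t = tanh(W_xh @ x_t + W_hh @ h_prev + b_h)."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

def run_rnn(xs, h0, W_xh, W_hh, b_h):
    """Unroll the recurrence over an input sequence, returning all hidden states."""
    h = h0
    states = []
    for x_t in xs:
        h = rnn_step(x_t, h, W_xh, W_hh, b_h)
        states.append(h)
    return states

# Hypothetical dimensions for illustration: input size 3, hidden size 4, sequence length 5.
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(4, 3))
W_hh = rng.normal(scale=0.1, size=(4, 4))
b_h = np.zeros(4)
xs = [rng.normal(size=3) for _ in range(5)]
states = run_rnn(xs, np.zeros(4), W_xh, W_hh, b_h)
```

The variants surveyed in the paper (e.g., gated architectures) change the form of `rnn_step`, while keeping the overall unrolled recurrence of `run_rnn`.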