Abstract: During the training of a deep convolutional neural network (DCNN), the convolution kernels are usually initialized with random values. In addition, gradient-based learning of the network parameters often suffers from the vanishing gradient problem. To address these problems, a learning method for DCNNs based on deconvolution feature extraction is proposed. First, an unsupervised two-layer stacked deconvolutional neural network is used to learn feature mapping matrices from the original images. The learned feature mapping matrices are then used as convolution kernels, which are applied to the images layer by layer through convolution and pooling. Finally, the DCNN is fine-tuned with mini-batch stochastic gradient descent with a momentum coefficient, which helps avoid the vanishing gradient problem. Experimental results on the MNIST, CIFAR-10, and CIFAR-100 data sets show that the proposed method effectively improves image classification accuracy.
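To make the described pipeline concrete, the sketch below shows one plausible form of the final stage in PyTorch: convolution kernels initialized from externally learned feature mapping matrices, followed by fine-tuning with mini-batch SGD and a momentum coefficient. The network shapes, filter counts, hyperparameters, and the random placeholder kernels are illustrative assumptions, not the authors' implementation, and the unsupervised deconvolutional pretraining step is not reproduced here.

```python
# Minimal sketch (assumed shapes and hyperparameters): conv kernels are
# initialized from feature mapping matrices learned elsewhere (e.g., by an
# unsupervised deconvolutional network), then the DCNN is fine-tuned with
# mini-batch SGD plus a momentum coefficient.
import torch
import torch.nn as nn

class SmallDCNN(nn.Module):
    def __init__(self, kernels1, kernels2, num_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, kernels1.shape[0], kernel_size=kernels1.shape[-1])
        self.conv2 = nn.Conv2d(kernels1.shape[0], kernels2.shape[0], kernel_size=kernels2.shape[-1])
        # Initialize the kernels from the pre-learned feature mapping matrices.
        with torch.no_grad():
            self.conv1.weight.copy_(kernels1)
            self.conv2.weight.copy_(kernels2)
        self.pool = nn.MaxPool2d(2)
        self.act = nn.ReLU()
        self.head = nn.LazyLinear(num_classes)

    def forward(self, x):
        x = self.pool(self.act(self.conv1(x)))
        x = self.pool(self.act(self.conv2(x)))
        return self.head(torch.flatten(x, 1))

# Hypothetical pre-learned kernels: 8 and 16 filters of size 5x5 for MNIST-like input.
k1 = torch.randn(8, 1, 5, 5)   # placeholder for matrices learned by the deconv network
k2 = torch.randn(16, 8, 5, 5)
model = SmallDCNN(k1, k2)

# Fine-tuning with mini-batch SGD and a momentum coefficient (values assumed).
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()
images = torch.randn(32, 1, 28, 28)          # one dummy mini-batch
labels = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```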