Abstract: We propose a novel learning strategy inspired by domain decomposition methods to accelerate the training of convolutional neural networks (CNNs). The proposed method is applied to residual networks (ResNets) for image classification tasks. The best result is achieved with ResNet32: we split ResNet32 into four sub-networks, each with 0.47 M parameters, i.e., 1/16 of the original ResNet32, thereby facilitating the learning process. Moreover, because the sub-networks can be trained in parallel, the computational time is reduced from 8.53 h (with the conventional learning strategy) to 5.65 h in the classification task on the CIFAR-10 dataset. We also find that the classification accuracy improves from 92.82% to 94.09%. Similar improvements are achieved on the CIFAR-100 and Food-101 datasets. In conclusion, the proposed learning strategy substantially reduces the computational time while improving classification accuracy. The results suggest that the proposed strategy can potentially be applied to train CNNs with a large number of parameters.
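The 1/16 figure follows from how convolutional parameter counts scale: a conv layer's weight count is proportional to in_channels × out_channels, so partitioning both into 4 groups divides each group's count by 16. The sketch below illustrates this arithmetic only; the channel counts and the `conv_params` helper are illustrative assumptions, not the authors' code or the actual ResNet32 decomposition.

```python
# Hedged sketch: why 4-way channel splitting gives sub-networks with
# roughly 1/16 of a conv layer's parameters each.

def conv_params(c_in, c_out, k=3):
    """Weight count of a k x k convolution (biases ignored for simplicity)."""
    return c_in * c_out * k * k

groups = 4
c_in, c_out = 64, 64  # illustrative channel counts, not taken from the paper

full = conv_params(c_in, c_out)             # full layer: 64*64*9 weights
sub = conv_params(c_in // groups, c_out // groups)  # one sub-network's share

print(full // sub)  # -> 16, i.e. each sub-network holds 1/16 of the weights
```

Because each sub-network's loss depends only on its own parameters, the four smaller training problems can run concurrently, which is the source of the reported wall-clock reduction.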