Computational Visual Media  2020, Vol. 6 Issue (4): 455-466    doi: 10.1007/s41095-020-0185-5
 Research Article
Weight asynchronous update: Improving the diversity of filters in a deep convolutional network
Dejun Zhang1, Linchao He2, Mengting Luo2, Zhanya Xu1(✉), Fazhi He3
1 School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2 College of Information and Engineering, Sichuan Agricultural University, Yaan 625014, China
3 School of Computer, Wuhan University, Wuhan 430072, China

Abstract

Deep convolutional networks have achieved remarkable results on various visual tasks thanks to their strong ability to learn a variety of features. A well-trained deep convolutional network can be compressed to 20%-40% of its original size by removing filters that make little contribution, since redundant filters generate many overlapping features. Model compression can remove unnecessary filters, but it does not exploit the redundant filters themselves, as the training phase is unaffected. Modern networks with residual connections, dense connections, and inception blocks are thought to mitigate the overlap among convolutional filters, but they do not fully overcome it. To do so, we propose a new training strategy, weight asynchronous update (WAU), which significantly increases the diversity of filters and enhances the representation ability of the network. The proposed method can be applied to a wide range of convolutional networks without changing the network topology. Our experiments show that updating a stochastic subset of filters in each iteration significantly reduces filter overlap in convolutional networks; extensive experiments show that our method yields noteworthy improvements in neural network performance.
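The core idea described in the abstract, updating only a stochastic subset of filters in each iteration, can be sketched in a few lines. The following is a minimal NumPy illustration under our own assumptions (the function name `wau_step`, the plain-SGD update rule, and the way the async rate `r` selects filters are hypothetical; the paper's actual update rule may differ):

```python
import numpy as np

def wau_step(weights, grads, lr=0.1, r=0.5, rng=None):
    """One SGD step with a weight-asynchronous update (illustrative sketch).

    weights, grads: arrays of shape (num_filters, c, h, w).
    r: async rate -- each filter is included in this iteration's
       update with probability r; the rest keep their old weights.
    Returns the updated weights and the boolean mask of updated filters.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw one Bernoulli(r) decision per filter.
    mask = rng.random(weights.shape[0]) < r
    updated = weights.copy()
    # Only the selected subset of filters receives the gradient step;
    # unselected filters are left untouched this iteration.
    updated[mask] -= lr * grads[mask]
    return updated, mask
```

In a full training loop this masking would be applied per convolutional layer at every iteration, so different filter subsets are updated at different times, which is what encourages the filters to diverge from one another.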

Received: 06 April 2020      Published: 30 November 2020
Fund: National Natural Science Foundation of China (Grant No. 61702350)
Corresponding author: Zhanya Xu. E-mail: zhangdejun@cug.edu.cn; fpsandnoob@hotmail.com; sookie0331@icloud.com; zhanyaxu@163.com; fzhe@whu.edu.cn
About the authors:

Dejun Zhang received his Ph.D. degree from the Department of Computer Science, Wuhan University, China, in 2015. He is currently an associate professor at the School of Geography and Information Engineering, China University of Geosciences. Since 2015, he has been a senior member of the China Society for Industrial and Applied Mathematics (CSIAM) and a member of the Geometric Design & Computing Committee of CSIAM. Since 2020, he has been a senior member of the China Computer Federation (CCF). He was a technical program chair of the 5th Asian Conference on Pattern Recognition (ACPR 2019). His research areas include computer vision, computer graphics, image and video processing, and deep learning. He has published more than 20 refereed articles in journals and conference proceedings.

Linchao He is currently a senior student in the College of Information and Engineering, Sichuan Agricultural University (SICAU), Yaan, China. He is a member of the CCF. His research interests include image classification, object detection, action recognition, and deep learning.

Mengting Luo is currently a senior student in the College of Information and Engineering, Sichuan Agricultural University (SICAU), Yaan, China. She is a member of the CCF. Her research interests include image classification, object detection, and action recognition.

Zhanya Xu received his Ph.D. degree from China University of Geosciences in 2010. He is currently a lecturer in the School of Geography and Information Engineering, China University of Geosciences. He is a member of the CCF. His research areas include spatial information services, big data processing, and intelligent computing. He has published more than 20 papers in journals and conferences.

Fazhi He received his Ph.D. degree from Wuhan University of Technology. He was a postdoctoral researcher in the State Key Laboratory of CAD & CG at Zhejiang University, a visiting researcher at the Korea Advanced Institute of Science & Technology, and a visiting faculty member at the University of North Carolina at Chapel Hill. He is now a professor in the School of Computer, Wuhan University. He has been a senior member of CSIAM and a member of the Geometric Design & Computing Committee of CSIAM. Currently, he is a member of the editorial board of the Journal of Computer-Aided Design & Computer Graphics. His research interests are computer graphics, computer-aided design, and computer supported cooperative work.
Fig. 1 Features learned by sync and async updating. $c$, $w$, and $h$ are channel, width, and height, respectively.

Fig. 2 Weight asynchronous update training strategy. $c$, $w$, and $h$ are channel, width, and height, respectively.

Table 1 WAU shows significant performance improvement over the baseline on both CIFAR-10 and CIFAR-100 (see Ref. [28], chap. 3)

Table 2 Object detection accuracy (%) for Faster R-CNN [32] on the COCO minival set [33]. All models were trained on the trainval35k set with images of size 600 pixels

Table 3 Object detection accuracy (%) using Faster R-CNN [32] on the Pascal VOC 2007 test set. Models were trained on the Pascal VOC 2007 trainval set

Fig. 3 Filter correlation using sync and async. (a) Correlation of 32 filters within a single layer. (b) Correlation of 64 filters between two layers inside a residual block. Upper and lower triangles respectively represent the results of the sync and async weight update training methods.

Table 4 Different training flows. Both strategies improve accuracy (%) for ResNet-32 trained on CIFAR-10, but ASA is better

Table 5 Comparison of the test error (%) of WAU on CIFAR-10 with other related training strategies

Fig. 4 Test accuracy and convergence speed of our WAU method and a baseline, for various convolutional networks, on CIFAR-10.

Fig. 5 Ablation of the affine transform in BN. Performance on an image recognition task is shown for four models: WAU and Dropout, each with and without the BN affine transform.

Table 6 Comparison of the classification accuracy (%) of WAU with different async rates $r$. A sufficiently high async rate always improves performance, and the method is not sensitive to this hyperparameter.

Fig. 6 Influence of the hyperparameter $r$. A sufficiently high async rate always improves performance, and the method is not sensitive to this hyperparameter.