Review: Shake-Shake Regularization (Image Classification)

Concept of Adding Noise to the Gradient During Training; Outperforms WRN, ResNeXt and DenseNet.

In this story, Shake-Shake Regularization (Shake-Shake), by Xavier Gastaldi from London Business School, is briefly reviewed.

The motivation of this paper is that, since data augmentation is applied to the input image, it might also be possible to apply data augmentation techniques to internal representations.

It has been found in prior work that adding noise to the gradient during training helps the training and generalization of complicated neural networks.

And Shake-Shake regularization can be seen as an extension of this concept where gradient noise is replaced by a form of gradient augmentation.

This is a paper in the 2017 ICLR Workshop with over 10 citations, while the longer 2017 arXiv version has over 100 citations.

(Sik-Ho Tsang @ Medium)

Outline
1. Shake-Shake Regularization
2. Experimental Results
3. Further Evaluations

1. Shake-Shake Regularization

In particular, a 3-branch ResNet is studied in this paper, as in the above figure, with the equation below:

x_{i+1} = x_i + F(x_i, W_i^(1)) + F(x_i, W_i^(2))

With Shake-Shake regularization, a scaling coefficient α_i is added, where α_i is a random variable following a uniform distribution between 0 and 1:

x_{i+1} = x_i + α_i · F(x_i, W_i^(1)) + (1 − α_i) · F(x_i, W_i^(2))

α is set to 0.5 during test time, just like Dropout.
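To make the mechanics concrete, here is a minimal PyTorch-style sketch (my own illustration, not the author's released code; ShakeShakeBlock, branch1 and branch2 are assumed names). It mixes the two residual branches with a per-image random α at the forward pass, an independent random β at the backward pass, and a fixed 0.5 at test time:

```python
import torch
import torch.nn as nn
from torch.autograd import Function


class ShakeShake(Function):
    @staticmethod
    def forward(ctx, y1, y2, training):
        if training:
            # Forward "Shake": a fresh random alpha per image, broadcast over C, H, W
            alpha = torch.rand(y1.size(0), 1, 1, 1, device=y1.device, dtype=y1.dtype)
        else:
            # Test time: deterministic 0.5, analogous to Dropout's expectation
            alpha = torch.full((y1.size(0), 1, 1, 1), 0.5, device=y1.device, dtype=y1.dtype)
        return alpha * y1 + (1.0 - alpha) * y2

    @staticmethod
    def backward(ctx, grad_output):
        # Backward "Shake": a new, independent random beta (not the forward alpha)
        beta = torch.rand(grad_output.size(0), 1, 1, 1,
                          device=grad_output.device, dtype=grad_output.dtype)
        return beta * grad_output, (1.0 - beta) * grad_output, None


class ShakeShakeBlock(nn.Module):
    """3-branch residual block: identity skip connection + two stochastic branches."""

    def __init__(self, branch1: nn.Module, branch2: nn.Module):
        super().__init__()
        self.branch1, self.branch2 = branch1, branch2

    def forward(self, x):
        y = ShakeShake.apply(self.branch1(x), self.branch2(x), self.training)
        return x + y
```

Here each branch would be a small stack of 3×3 convolutions with batch normalization, and the sketch assumes both branches preserve the input shape so the skip connection can be added directly.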

2. Experimental Results

2.1. CIFAR-10

A 26 2×32d ResNet (i.e. the network has a depth of 26, 2 residual branches, and the first residual block has a width of 32) is used.

Shake: All scaling coefficients are overwritten with new random numbers before the pass.

Even: All scaling coefficients are set to 0.5 before the pass.

Keep: For the backward pass, keep the scaling coefficients used during the forward pass.

Batch: For each residual block i, the same scaling coefficient is applied to all the images in the mini-batch.

Image: For each residual block i, a different scaling coefficient is applied to each image in the mini-batch (see the sketch after this list).
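As a rough sketch of the Batch vs. Image distinction (my own illustration, assuming activations of shape N×C×H×W; sample_coefficient is an assumed helper name), the only difference is the shape of the random coefficient tensor before broadcasting:

```python
import torch


def sample_coefficient(batch_size, level="Image", device="cpu"):
    if level == "Batch":
        # One scalar shared by all images in the mini-batch
        return torch.rand(1, 1, 1, 1, device=device)
    if level == "Image":
        # A different scalar for each image in the mini-batch
        return torch.rand(batch_size, 1, 1, 1, device=device)
    raise ValueError(f"unknown level: {level}")


# Usage with two branch outputs y1, y2 of shape (N, C, H, W):
#   alpha = sample_coefficient(y1.size(0), level="Image")
#   y = alpha * y1 + (1 - alpha) * y2
# "Even" would instead use a constant 0.5, and "Keep" would reuse the
# forward coefficient at the backward pass rather than sampling a new one.
```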

Error Rates on CIFAR-10

Using Shake at the forward pass gives better performance, and Shake-Shake-Image (S-S-I) obtains the best results.

2.2. CIFAR-100

Error Rates on CIFAR-100

Using Shake at the forward pass again improves the performance. In particular, Shake-Even-Image (S-E-I) is the best.

2.3. Comparison with State-of-the-art Approaches

Test Error (%) and Model Size on CIFAR

On CIFAR-10, S-S-I outperforms WRN, ResNeXt and DenseNet.

On CIFAR-100, S-E-I outperforms WRN, ResNeXt and DenseNet as well.

3. Further Evaluations

3.1. Correlation Between Residual Branches

To calculate the correlation between the two residual branches:

1. Forward the mini-batch through residual branch 1 and store the output tensor in y_i^(1). Do the same for residual branch 2 and store it in y_i^(2).
2. Flatten y_i^(1) and y_i^(2) into flat_i^(1) and flat_i^(2), respectively.
3. Calculate the covariance between each corresponding item in the two vectors.
4. Calculate the variances of flat_i^(1) and flat_i^(2).
5. Repeat for all images in the test set.
6. Use the resulting covariance and variances to calculate the correlation (a code sketch follows below).
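Under the assumption that each residual block exposes its two branches as block.branch1 and block.branch2, and that inputs yields the activations feeding that block for every test batch (both are assumed names, not the paper's code), the procedure above could be sketched as:

```python
import torch


@torch.no_grad()
def branch_correlation(block, inputs):
    flats1, flats2 = [], []
    for x in inputs:
        flats1.append(block.branch1(x).flatten())  # flat_i^(1)
        flats2.append(block.branch2(x).flatten())  # flat_i^(2)
    f1, f2 = torch.cat(flats1), torch.cat(flats2)
    # Pearson correlation = covariance / (std_1 * std_2)
    cov = ((f1 - f1.mean()) * (f2 - f2.mean())).mean()
    return (cov / (f1.std(unbiased=False) * f2.std(unbiased=False))).item()
```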

Correlation results on E-E-B and S-S-I models

First of all, the correlation between the output tensors of the 2 residual branches seems to be reduced by the regularization.

This would support the assumption that the regularization forces the branches to learn something different.

The summation at the end of the residual blocks forces an alignment of the layers on the left and right residual branches, and the regularization reduces this correlation.

3.2. Regularization Strength

Update Rules for β

Left: Training curves (dark) and test curves (light) of models M1 to M5. Right: Illustration of the different methods in the above table.

The further away β is from α, the stronger the regularization effect.

3.3. Removing Skip Connection / Batch Normalization

Architecture A is 26 2×32d but without the skip connection.

Architecture B is the same as A but with only 1 convolutional layer per branch and twice the number of blocks.

Architecture C is the same as A but without batch normalization.

Error Rates on CIFAR-10

The results of architecture A clearly show that shake-shake regularization can work even without a skip connection.

The results of architecture B show that the regularization no longer works when there is only one convolutional layer per branch.

Architecture C, without batch normalization, makes the model much more sensitive and difficult to converge; it is also really easy to make the model diverge.

With this simple yet novel idea and, of course, the positive results, the paper was published in the 2017 ICLR Workshop, which is very encouraging.

References

[2017 arXiv] [Shake-Shake] Shake-Shake Regularization

[2017 ICLR Workshop] [Shake-Shake] Shake-Shake Regularization of 3-Branch Residual Networks

