Paper reading: Image Super-Resolution Using Very Deep Residual Channel Attention Networks
4. Experiments
- Settings
Following [23, 36, 43, 44], we use 800 training images from the DIV2K dataset [36] as the training set. For testing, we use five standard benchmark datasets: Set5 [1], Set14 [41], B100 [24], Urban100 [13], and Manga109 [25]. We conduct experiments with bicubic (BI) and blur-downscale (BD) degradation models [42–44]. The SR results are evaluated with PSNR and SSIM [40] on the Y channel (i.e., luminance) of the transformed YCbCr space. Data augmentation is performed on the 800 training images, which are randomly rotated by 90°, 180°, 270° and flipped horizontally. In each training batch, 16 LR color patches of size 48 × 48 are extracted as inputs. Our model is trained with the ADAM optimizer [18] using β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸. The initial learning rate is set to 10⁻⁴ and then halved every 2 × 10⁵ iterations of back-propagation. We use PyTorch [28] to implement our models on a Titan Xp GPU.
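The learning-rate schedule described above can be sketched in plain Python. This is a minimal sketch assuming a simple step decay (initial rate 10⁻⁴, halved every 2 × 10⁵ iterations); the function and constant names are illustrative, not taken from the authors' code:

```python
# Illustrative step-decay schedule matching the paper's description
# (assumed formulation: lr = initial * 0.5 ** floor(iteration / step)).

INITIAL_LR = 1e-4       # initial learning rate, 10^-4
HALVING_STEP = 200_000  # halve every 2 x 10^5 back-propagation iterations


def learning_rate(iteration: int) -> float:
    """Return the learning rate used at a given training iteration."""
    return INITIAL_LR * 0.5 ** (iteration // HALVING_STEP)


if __name__ == "__main__":
    for it in (0, 199_999, 200_000, 400_000):
        print(f"iteration {it:>7d}: lr = {learning_rate(it):.2e}")
```

In a PyTorch training loop this behavior would typically be obtained with a built-in step scheduler (e.g. `torch.optim.lr_scheduler.StepLR` with `step_size=200_000` counted in iterations and `gamma=0.5`), stepped once per batch rather than once per epoch.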