[1] C. Dong, C. C. Loy, K. He, and X. Tang, “Learning a deep convolutional network for image super-resolution,” ECCV, pp. 184–199, 2014.
[2] C. Dong, C. C. Loy, and X. Tang, “Accelerating the super-resolution convolutional neural network,” ECCV, 2016.
[3] M. Lin, Q. Chen, and S. Yan, “Network in network,” ICLR, 2014.
[4] K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification,” ICCV, pp. 1026–1034, 2015.
[5] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, “Photo-realistic single image super-resolution using a generative adversarial network,” CVPR, 2017.
[6] J. Bruna, P. Sprechmann, and Y. LeCun, “Super-resolution with deep convolutional sufficient statistics,” ICLR, 2016.
[7] W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueckert, and Z. Wang, “Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network,” CVPR, pp. 1874–1883, 2016.
[8] C. Dong, C. C. Loy, K. He, and X. Tang, “Image super-resolution using deep convolutional networks,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 2, pp. 295–307, 2016.
[9] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” ICLR, 2015.
[10] A. Dosovitskiy and T. Brox, “Generating images with perceptual similarity metrics based on deep networks,” NIPS, 2016.
[11] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” CVPR, 2016.
[12] L. Gatys, A. S. Ecker, and M. Bethge, “Texture synthesis using convolutional neural networks,” NIPS, 2015.
[13] J. Johnson, A. Alahi, and F. Li, “Perceptual losses for real-time style transfer and super-resolution,” ECCV, pp. 694–711, 2016.
[14] D. Ulyanov, V. Lebedev, A. Vedaldi, and V. Lempitsky, “Feed-forward synthesis of textures and stylized images,” ICML, 2016.
[15] Y. Jo, S. W. Oh, J. Kang, and S. J. Kim, “Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation,” CVPR, pp. 3224–3232, 2018.
[16] X. Tao, H. Gao, R. Liao, J. Wang, and J. Jia, “Detail-revealing deep video super-resolution,” ICCV, 2017.
[17] D. Liu, Z. Wang, Y. Fan, X. Liu, Z. Wang, S. Chang, and T. Huang, “Robust video super-resolution with learned temporal dynamics,” ICCV, pp. 2526–2534, 2017.
[18] R. Liao, X. Tao, R. Li, Z. Ma, and J. Jia, “Video super-resolution via deep draft-ensemble learning,” ICCV, pp. 531–539, 2015.
[19] M. S. M. Sajjadi, R. Vemulapalli, and M. Brown, “Frame-recurrent video super-resolution,” CVPR, 2018.
[20] J. Caballero, C. Ledig, A. P. Aitken, A. Acosta, J. Totz, Z. Wang, and W. Shi, “Real-time video super-resolution with spatio-temporal networks and motion compensation,” CVPR, 2017.
[21] C. Liu and D. Sun, “A Bayesian approach to adaptive video super resolution,” CVPR, 2011.
[22] W. Zhao and H. S. Sawhney, “Is super-resolution with optical flow feasible?,” ECCV, 2002.
[23] A. Kappeler, S. Yoo, Q. Dai, and A. K. Katsaggelos, “Video super-resolution with convolutional neural networks,” IEEE Transactions on Computational Imaging, vol. 2, no. 2, 2016.
[24] O. Makansi, E. Ilg, and T. Brox, “End-to-end learning of video super-resolution with motion compensation,” GCPR, 2017.
[25] M. S. M. Sajjadi, R. Vemulapalli, and M. Brown, “Frame-recurrent video super-resolution,” CVPR, 2018.
[26] T. H. Kim, M. S. M. Sajjadi, M. Hirsch, and B. Schölkopf, “Spatio-temporal transformer network for video restoration,” ECCV, 2018.
[27] E. Perez-Pellitero, M. S. Sajjadi, M. Hirsch, and B. Schölkopf, “Photorealistic video super resolution,” arXiv preprint, 2018.
[28] M. S. M. Sajjadi, B. Schölkopf, and M. Hirsch, “EnhanceNet: Single image super-resolution through automated texture synthesis,” ICCV, 2017.
[29] M. Chu, Y. Xie, L. Leal-Taixé, and N. Thuerey, “Temporally coherent GANs for video super-resolution (TecoGAN),” arXiv preprint, 2018.
[30] M. Jaderberg, K. Simonyan, A. Zisserman, and K. Kavukcuoglu, “Spatial transformer networks,” NIPS, 2015.
[31] Y. Wang, F. Perazzi, B. McWilliams, A. Sorkine-Hornung, O. Sorkine-Hornung, and C. Schroers, “A fully progressive approach to single-image super-resolution,” CVPR Workshops, 2018.
[32] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Deep Laplacian pyramid networks for fast and accurate super-resolution,” CVPR, 2017.
[33] W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, “Fast and accurate image super-resolution with deep Laplacian pyramid networks,” arXiv preprint arXiv:1710.01992, 2017.
[34] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten, “Densely connected convolutional networks,” arXiv preprint arXiv:1608.06993, 2016.
[35] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” CVPR, pp. 770–778, 2016.
[36] D. Balduzzi, M. Frean, L. Leary, J. P. Lewis, K. W.-D. Ma, and B. McWilliams, “The shattered gradients problem: If resnets are the answer, then what is the question?,” Proceedings of the 34th International Conference on Machine Learning, PMLR vol. 70, pp. 342–350, 2017.
[37] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” NIPS, pp. 2672–2680, 2014.
[38] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, et al., “Photo-realistic single image super-resolution using a generative adversarial network,” arXiv preprint, 2016.
[39] M. S. M. Sajjadi, B. Schölkopf, and M. Hirsch, “EnhanceNet: Single image super-resolution through automated texture synthesis,” arXiv preprint, 2016.
[40] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. P. Smolley, “Least squares generative adversarial networks,” arXiv preprint, 2016.
[41] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint arXiv:1409.1556, 2014.
[42] Y. Bengio, J. Louradour, R. Collobert, and J. Weston, “Curriculum learning,” Proceedings of the 26th Annual International Conference on Machine Learning, pp. 41–48, 2009.
[43] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of GANs for improved quality, stability, and variation,” arXiv preprint, 2017.
[44] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein GAN,” arXiv preprint, 2017.
[45] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” arXiv preprint, 2014.