[1] Anonymous, The Danbooru Community, G. Branwen, and A. Gokaslan, "Danbooru2018: A Large-Scale Crowdsourced and Tagged Anime Illustration Dataset," https://www.gwern.net/Danbooru2018
[2] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," International Conference on Learning Representations, 2015.
[3] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241, 2015.
[4] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Conference and Workshop on Neural Information Processing Systems, pp. 2672–2680, 2014.
[5] G. Liu, X. Chen, and Y. Hu, "Anime sketch coloring with Swish-gated residual U-Net," Computational Intelligence and Intelligent Systems, pp. 190–204, 2019.
[6] Q. Chen and V. Koltun, "Photographic image synthesis with cascaded refinement networks," Proceedings of the International Conference on Computer Vision, 2017.
[7] L. Zhang, Y. Ji, and X. Lin, "Style transfer for anime sketches with enhanced residual U-net and auxiliary classifier GAN," Proceedings of the Asian Conference on Pattern Recognition, 2017.
[8] T. Yonetsuji, "PaintsChainer," github.com/pfnet/Paintschainer, 2017.
[9] Y. Liu, Z. Qin, T. Wan, and Z. Luo, "Auto-painter: cartoon image generation from sketch by using conditional Wasserstein generative adversarial networks," Neurocomputing, vol. 311, pp. 78–87, 2018.
[10] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," Proceedings of the 29th IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.
[11] Z. Zhang, Q. Liu, and Y. Wang, "Road extraction by deep residual U-Net," IEEE Geoscience and Remote Sensing Letters, pp. 749–753, 2018.
[12] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," Proceedings of the 3rd International Conference on Learning Representations, 2015.
[13] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," IEEE Conference on Computer Vision and Pattern Recognition, 2017.
[14] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," IEEE International Conference on Computer Vision, 2017.
[15] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, pp. 600–612, 2004.
[16] P. Ramachandran, B. Zoph, and Q. V. Le, "Searching for activation functions," CoRR abs/1710.05941, 2017.
[17] J. L. Ba, J. R. Kiros, and G. E. Hinton, "Layer normalization," CoRR abs/1607.06450, 2016.
[18] Z. Cheng, Q. Yang, and B. Sheng, "Deep colorization," IEEE International Conference on Computer Vision, 2015.
[19] L. Zhang, C. Li, T.-T. Wong, Y. Ji, and C. Liu, "Two-stage sketch colorization," ACM Transactions on Graphics (SIGGRAPH Asia 2018 issue), pp. 261:1–261:14, 2018.
[20] R. Zhang, P. Isola, and A. A. Efros, "Colorful image colorization," European Conference on Computer Vision, 2016.
[21] E. S. L. Gastal and M. M. Oliveira, "Domain transform for edge-aware image and video processing," ACM SIGGRAPH International Conference on Computer Graphics and Interactive Techniques, 2011.
[22] H. Heo and Y. Hwang, "Automatic sketch colorization using DCGAN," 18th International Conference on Control, Automation and Systems, 2018.
[23] L. Fang, L. Wang, G. Lu, D. Zhang, and J. Fu, "Hand-drawn grayscale image colorful colorization based on natural image," The Visual Computer, 2018.
[24] L. A. Gatys, A. S. Ecker, and M. Bethge, "A neural algorithm of artistic style," arXiv preprint arXiv:1508.06576, 2015.
[25] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," IEEE Conference on Computer Vision and Pattern Recognition, 2016.
[26] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," IEEE International Conference on Computer Vision, 2017.
[27] A. Odena, C. Olah, and J. Shlens, "Conditional image synthesis with auxiliary classifier GANs," arXiv preprint arXiv:1610.09585, 2016.
[28] J. Johnson, A. Alahi, and F.-F. Li, "Perceptual losses for real-time style transfer and super-resolution," arXiv:1603.08155, 2016.
[29] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.
[30] M. Mirza and S. Osindero, "Conditional generative adversarial nets," CoRR abs/1411.1784, 2014.
[31] C. Ledig, L. Theis, F. Huszar, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, and W. Shi, "Photo-realistic single image super-resolution using a generative adversarial network," IEEE Conference on Computer Vision and Pattern Recognition, 2016.