[7] Wang KF, Zuo WM, Tan Y, et al. Generative adversarial networks: From generating data to creating intelligence. Acta Automatica
Sinica, 2018,44(5):769−774 (in Chinese with English abstract).
[8] Wang WL, Li ZR. Advances in generative adversarial network. Journal on Communications, 2018,39(2):135−148 (in Chinese with
English abstract).
[9] Xiao J, Tian H, Zhang Y, et al. Blind video denoising via texture-aware noise estimation. Computer Vision and Image
Understanding, 2018,169(4):1−13.
[10] Freedman D, Kisilev P. Object-to-object color transfer: Optimal flows and SMSP transformations. In: Proc. of the 2010 IEEE
Computer Society Conf. on Computer Vision and Pattern Recognition. IEEE, 2010. 287−294.
[11] Laffont PY, Ren Z, Tao X, et al. Transient attributes for high-level understanding and editing of outdoor scenes. ACM Trans. on
Graphics, 2014,33(4):149.
[12] Tsai YH, Shen X, Lin Z, et al. Sky is not the limit: Semantic-aware sky replacement. ACM Trans. on Graphics, 2016,35(4):
149:1−149:11.
[13] Gatys LA, Ecker AS, Bethge M. Image style transfer using convolutional neural networks. In: Proc. of the IEEE Conf. on Computer
Vision and Pattern Recognition. 2016. 2414−2423.
[14] Li Y, Fang C, Yang J, et al. Diversified texture synthesis with feed-forward networks. In: Proc. of the IEEE Conf. on Computer
Vision and Pattern Recognition. 2017. 3920−3928.
[15] Chen D, Yuan L, Liao J, et al. Stylebank: An explicit representation for neural image style transfer. In: Proc. of the IEEE Conf. on
Computer Vision and Pattern Recognition. 2017. 1897−1906.
[16] Huang X, Belongie S. Arbitrary style transfer in real-time with adaptive instance normalization. In: Proc. of the IEEE Int’l Conf. on
Computer Vision. 2017. 1501−1510.
[17] Li S, Xu X, Nie L, et al. Laplacian-steered neural style transfer. In: Proc. of the 25th ACM Int’l Conf. on Multimedia. ACM, 2017.
1716−1724.
[18] Wang TC, Liu MY, Zhu JY, et al. High-resolution image synthesis and semantic manipulation with conditional GANs. In: Proc. of
the IEEE Conf. on Computer Vision and Pattern Recognition. 2018. 8798−8807.
[19] Liu MY, Tuzel O. Coupled generative adversarial networks. In: Proc. of the 30th Conf. on Neural Information Processing Systems.
Barcelona, 2016. 469−477.
[20] Shrivastava A, Pfister T, Tuzel O, et al. Learning from simulated and unsupervised images through adversarial training. In: Proc. of
the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 2107−2116.
[21] Zhu JY, Park T, Isola P, et al. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proc. of the
IEEE Int’l Conf. on Computer Vision. 2017. 2223−2232.
[22] Huang X, Liu MY, Belongie S, et al. Multimodal unsupervised image-to-image translation. In: Proc. of the European Conf. on
Computer Vision. 2018. 172−189.
[23] Luan F, Paris S, Shechtman E, et al. Deep photo style transfer. In: Proc. of the IEEE Conf. on Computer Vision and Pattern
Recognition. 2017. 4990−4998.
[24] Mirza M, Osindero S. Conditional generative adversarial nets. arXiv preprint arXiv:1411.1784, 2014.
[25] Bousmalis K, Silberman N, Dohan D, et al. Unsupervised pixel-level domain adaptation with generative adversarial networks. In:
Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. 2017. 3722−3731.
[26] Chen Q, Koltun V. Photographic image synthesis with cascaded refinement networks. In: Proc. of the IEEE Int’l Conf. on
Computer Vision. 2017. 1511−1520.
[27] Dosovitskiy A, Brox T. Generating images with perceptual similarity metrics based on deep networks. In: Proc. of the 30th Conf.
on Neural Information Processing Systems. Barcelona, 2016. 658−666.
[28] Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. In: Proc. of the IEEE Conf. on Computer
Vision and Pattern Recognition. 2015. 3431−3440.
[29] Chen Y, Lai YK, Liu YJ. CartoonGAN: Generative adversarial networks for photo cartoonization. In: Proc. of the IEEE Conf. on
Computer Vision and Pattern Recognition. 2018. 9465−9474.