|
[1]I.Goodfellow et al., “Generative adversarial networks,” Commun. ACM, vol. 63, no. 11, pp. 139–144, 2020, doi: 10.1145/3422622. [2]H.Zhang et al., “StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 41, no. 8, pp. 1947–1962, 2019, doi: 10.1109/TPAMI.2018.2856256. [3]T.Xu et al., “AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 1316–1324, 2018, doi: 10.1109/CVPR.2018.00143. [4]Z.Qi, C.Fan, L.Xu, X.Li, andS.Zhan, “MRP-GAN: Multi-resolution parallel generative adversarial networks for text-to-image synthesis,” Pattern Recognit. Lett., vol. 147, pp. 1–7, 2021, doi: 10.1016/j.patrec.2021.02.020. [5]T. Y.Lin et al., “Microsoft COCO: Common objects in context,” Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), vol. 8693 LNCS, no. PART 5, pp. 740–755, 2014, doi: 10.1007/978-3-319-10602-1_48. [6]M.Kang andJ.Park, “ContraGAN: Contrastive learning for conditional image generation,” Adv. Neural Inf. Process. Syst., vol. 2020-Decem, no. NeurIPS, 2020. [7]J.Jeong andJ.Shin, “Training GANs with Stronger Augmentations via Contrastive Discriminator,” no. 2020, pp. 1–23, 2021, [Online]. Available: http://arxiv.org/abs/2103.09742 [8]H.Zhang, J. Y.Koh, J.Baldridge, H.Lee, andY.Yang, “Cross-Modal Contrastive Learning for Text-to-Image Generation,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., no. 1, pp. 833–842, 2021, doi: 10.1109/CVPR46437.2021.00089. [9]T.DeVries andG. W.Taylor, “Improved Regularization of Convolutional Neural Networks with Cutout,” 2017, [Online]. Available: http://arxiv.org/abs/1708.04552 [10]S.Gidaris, P.Singh, andN.Komodakis, “Unsupervised representation learning by predicting image rotations,” 6th Int. Conf. Learn. Represent. ICLR 2018 - Conf. Track Proc., no. 2016, pp. 1–16, 2018. [11]A. G.Howard, “Some improvements on deep convolutional neural network based image classification,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Conf. Track Proc., 2014. [12]N.Reimers andI.Gurevych, “Sentence-BERT: Sentence embeddings using siamese BERT-networks,” EMNLP-IJCNLP 2019 - 2019 Conf. Empir. Methods Nat. Lang. Process. 9th Int. Jt. Conf. Nat. Lang. Process. Proc. Conf., pp. 3982–3992, 2020, doi: 10.18653/v1/d19-1410. [13]X.Chen andK.He, “Exploring simple Siamese representation learning,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., no. Figure 1, pp. 15745–15753, 2021, doi: 10.1109/CVPR46437.2021.01549. [14]J.Devlin, M. W.Chang, K.Lee, andK.Toutanova, “BERT: Pre-training of deep bidirectional transformers for language understanding,” NAACL HLT 2019 - 2019 Conf. North Am. Chapter Assoc. Comput. Linguist. Hum. Lang. Technol. - Proc. Conf., vol. 1, no. Mlm, pp. 4171–4186, 2019. [15]A.Radford, L.Metz, andS.Chintala, “UNSUPERVISED REPRESENTATION LEARNING WITH DEEP CONVOLUTIONAL GENERATIVE ADVERSARIAL NETWORKS”. [16]P.Ghosh, P.Singh Gupta, R.Uziel, A.Ranjan, M. J.Black, andT.Bolkart, “GIF: Generative Interpretable Faces,” 2020, Accessed: Jul.11, 2022. [Online]. Available: http://gif.is.tue.mpg.de. [17]T.Karras NVIDIA andS.Laine NVIDIA, “A Style-Based Generator Architecture for Generative Adversarial Networks Timo Aila NVIDIA”, Accessed: Jul.11, 2022. [Online]. Available: https://github.com/NVlabs/stylegan [18]J. Y.Zhu, T.Park, P.Isola, andA. A.Efros, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” Proc. IEEE Int. Conf. Comput. Vis., vol. 2017-Octob, pp. 2242–2251, 2017, doi: 10.1109/ICCV.2017.244. [19]H.Liu, Z.Wan, W.Huang, Y.Song, X.Han, andJ.Liao, “PD-GAN: Probabilistic Diverse GAN for Image Inpainting,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 9367–9376, 2021, doi: 10.1109/CVPR46437.2021.00925. [20]M.Zhu, P.Pan, W.Chen, andY.Yang, “DM-GAN: Dynamic memory generative adversarial networks for text-to-image synthesis,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2019-June, pp. 5795–5803, 2019, doi: 10.1109/CVPR.2019.00595. [21]M.Arjovsky, S.Chintala, andL.Bottou, “Wasserstein GAN”. [22]X.Mao, Q.Li, H.Xie, R. Y. K.Lau, Z. W.¶4, andS. P.Smolley, “Least Squares Generative Adversarial Networks,” 2017. [23]J. H.Lim andJ. C.Ye, “Geometric GAN”. [24]H.Zhang, Z.Zhang, A.Odena, H.Lee, andG.Research, “CONSISTENCY REGULARIZATION FOR GENERATIVE ADVERSARIAL NETWORKS”. [25]T.Miyato, T.Kataoka, M.Koyama, andY.Yoshida, “SPECTRAL NORMALIZATION FOR GENERATIVE ADVERSARIAL NETWORKS”, Accessed: Jul.12, 2022. [Online]. Available: https://github.com/pfnet-research/sngan_ [26]I.Gulrajani, F.Ahmed, M.Arjovsky, V.Dumoulin, andA.Courville, “Improved Training of Wasserstein GANs Montreal Institute for Learning Algorithms”, Accessed: Jul.12, 2022. [Online]. Available: https://github.com/igul222/improved_wgan_training. [27]A.Van DenOord, N.Kalchbrenner, O.Vinyals, L.Espeholt, A.Graves, andK.Kavukcuoglu, “Conditional image generation with PixelCNN decoders,” Adv. Neural Inf. Process. Syst., pp. 4797–4805, 2016. [28]D. P.Kingma andM.Welling, “Auto-encoding variational bayes,” 2nd Int. Conf. Learn. Represent. ICLR 2014 - Conf. Track Proc., no. Ml, pp. 1–14, 2014. [29]A.Nguyen, J.Clune, Y.Bengio, A.Dosovitskiy, andJ.Yosinski, “Plug and play generative networks: Conditional iterative generation of images in latent space,” Proc. - 30th IEEE Conf. Comput. Vis. Pattern Recognition, CVPR 2017, vol. 2017-Janua, no. 1, pp. 3510–3520, 2017, doi: 10.1109/CVPR.2017.374. [30]E.Mansimov, E.Parisotto, J. L.Ba, andR.Salakhutdinov, “Generating images from captions with attention,” 4th Int. Conf. Learn. Represent. ICLR 2016 - Conf. Track Proc., pp. 1–12, 2016. [31]S.Reed, Z.Akata, X.Yan, L.Logeswaran, B.Schiele, andH.Lee, “Generative adversarial text to image synthesis,” 33rd Int. Conf. Mach. Learn. ICML 2016, vol. 3, pp. 1681–1690, 2016. [32]P.Khosla et al., “Supervised contrastive learning,” Adv. Neural Inf. Process. Syst., vol. 2020-Decem, no. NeurIPS, pp. 1–23, 2020. [33]X.Liu et al., “Self-supervised Learning: Generative or Contrastive,” IEEE Trans. Knowl. Data Eng., pp. 1–24, 2021, doi: 10.1109/TKDE.2021.3090866. [34]M. E.Peters et al., “Deep contextualized word representations,” NAACL HLT 2018 - 2018 Conf. North Am. Chapter Assoc. Comput. Linguist. Hum. Lang. Technol. - Proc. Conf., vol. 1, pp. 2227–2237, 2018, doi: 10.18653/v1/n18-1202. [35]T.Chen, S.Kornblith, M.Norouzi, andG.Hinton, “A simple framework for contrastive learning of visual representations,” 37th Int. Conf. Mach. Learn. ICML 2020, vol. PartF16814, no. Figure 1, pp. 1575–1585, 2020. [36]K.He, H.Fan, Y.Wu, S.Xie, andR.Girshick, “Momentum Contrast for Unsupervised Visual Representation Learning,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 9726–9735, 2020, doi: 10.1109/CVPR42600.2020.00975. [37]J. B.Grill et al., “Bootstrap your own latent a new approach to self-supervised learning,” Adv. Neural Inf. Process. Syst., vol. 2020-Decem, 2020. [38]T.Gao, X.Yao, andD.Chen, “SimCSE: Simple Contrastive Learning of Sentence Embeddings,” pp. 6894–6910, 2021, doi: 10.18653/v1/2021.emnlp-main.552. [39]Y.Tian, C.Sun, B.Poole, D.Krishnan, C.Schmid, andP.Isola, “What makes for good views for contrastive learning?,” Adv. Neural Inf. Process. Syst., vol. 2020-Decem, no. NeurIPS, pp. 1–24, 2020. [40]Y.Kalantidis, M. B.Sariyildiz, N.Pion, P.Weinzaepfel, andD.Larlus, “Hard negative mixing for contrastive learning,” Adv. Neural Inf. Process. Syst., vol. 2020-Decem, no. NeurIPS, pp. 1–21, 2020. [41]Y.Deng, J.Yang, D.Chen, F.Wen, andX.Tong, “Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 5153–5162, 2020, doi: 10.1109/CVPR42600.2020.00520. [42]Z.Zhao, Z.Zhang, T.Chen, S.Singh, andH.Zhang, “Image Augmentations for GAN Training,” 2020, [Online]. Available: http://arxiv.org/abs/2006.02595 [43]F.Qiao, N.Yao, Z.Jiao, Z.Li, H.Chen, andH.Wang, “Geometry-Contrastive GAN for Facial Expression Transfer,” 2018, [Online]. Available: http://arxiv.org/abs/1802.01822 [44]K. S.Lee, N. T.Tran, andN. M.Cheung, “InfoMax-GAN: Improved adversarial image generation via information maximization and contrastive learning,” Proc. - 2021 IEEE Winter Conf. Appl. Comput. Vision, WACV 2021, pp. 3941–3951, 2021, doi: 10.1109/WACV48630.2021.00399. [45]Y.Yan, R.Li, S.Wang, F.Zhang, W.Wu, andW.Xu, “ConSERT: A contrastive framework for self-supervised sentence representation transfer,” ACL-IJCNLP 2021 - 59th Annu. Meet. Assoc. Comput. Linguist. 11th Int. Jt. Conf. Nat. Lang. Process. Proc. Conf., pp. 5065–5075, 2021, doi: 10.18653/v1/2021.acl-long.393. [46]J.Gao, D.He, X.Tan, T.Qin, L.Wang, andT. Y.Liu, “Representation degeneration problem in training natural language generation models,” 7th Int. Conf. Learn. Represent. ICLR 2019, pp. 1–14, 2019. [47]Q. G.Lingxiao Wang, Jing Huang, Kevin Huang, Ziniu Hu, Guangtao Wang, “Improving neural language generation with spectrum control.,” Iclr, no. 2018, pp. 1–16, 2020. [48]B.Li, H.Zhou, J.He, M.Wang, Y.Yang, andL.Li, “On the sentence embeddings from pre-trained language models,” EMNLP 2020 - 2020 Conf. Empir. Methods Nat. Lang. Process. Proc. Conf., pp. 9119–9130, 2020, doi: 10.18653/v1/2020.emnlp-main.733. [49]X.Zhang, J.Zhao, andY.Lecun, “Character-level convolutional networks for text classification,” Adv. Neural Inf. Process. Syst., vol. 2015-Janua, pp. 649–657, 2015. [50]Z.Wu, S.Wang, J.Gu, M.Khabsa, F.Sun, andH.Ma, “CLEAR: Contrastive Learning for Sentence Representation,” 2020, [Online]. Available: http://arxiv.org/abs/2012.15466 [51]Y.Meng et al., “COCO-LM: Correcting and Contrasting Text Sequences for Language Model Pretraining,” no. NeurIPS, pp. 1–13, 2021, [Online]. Available: http://arxiv.org/abs/2102.08473 [52]A.Vaswani et al., “Attention Is All You Need”. [53]S.González-Carvajal andE. C.Garrido-Merchán, “Comparing BERT against traditional machine learning text classification,” no. Ml, 2020, [Online]. Available: http://arxiv.org/abs/2005.13012 [54]Y.Liu et al., “RoBERTa: A Robustly Optimized BERT Pretraining Approach,” no. 1, 2019, [Online]. Available: http://arxiv.org/abs/1907.11692 [55]X.Li, L.Bing, W.Zhang, andW.Lam, “Exploiting bert for end-to-end aspect-based sentiment analysis_,” W-NUT@EMNLP 2019 - 5th Work. Noisy User-Generated Text, Proc., pp. 34–41, 2019, doi: 10.18653/v1/d19-5505. [56]F.Schroff, D.Kalenichenko, andJ.Philbin, “FaceNet: A unified embedding for face recognition and clustering,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 07-12-June, pp. 815–823, 2015, doi: 10.1109/CVPR.2015.7298682. [57]P.Welinder et al., “Caltech-UCSD Birds 200”, Accessed: Jul.11, 2022. [Online]. Available: http://www.flickr.com/ [58]T.Salimans, I.Goodfellow, W.Zaremba, V.Cheung, A.Radford, andX.Chen, “Improved Techniques for Training GANs”, Accessed: Jul.11, 2022. [Online]. Available: https://github.com/openai/ [59]M.Heusel, H.Ramsauer, T.Unterthiner, B.Nessler, andS.Hochreiter, “GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium”.
|