TY - JOUR
T1 - Improving Fidelity of Synthesized Voices Generated by Using GANs
AU - Back, Moon-Ki
AU - Yoon, Seung-Won
AU - Lee, Sang-Baek
AU - Lee, Kyu-Chul
JO - KIPS Transactions on Software and Data Engineering
PY - 2021
DA - 2021/1/30
DO - https://doi.org/10.3745/KTSDE.2021.10.1.9
KW - Generative Adversarial Networks
KW - Fréchet Inception Distance
KW - Fidelity Improvement
KW - Synthesized Voice
AB - Although Generative Adversarial Networks (GANs) have gained great popularity in computer vision and related fields, GAN-based generation of raw audio signals remains largely unexplored. Unlike images, an audio signal is a one-dimensional sequence of discrete samples, so it is not easy to learn such signals with the CNN architectures widely used in image generation tasks. To overcome this difficulty, GAN researchers proposed a strategy of feeding time-frequency representations of audio to existing image-generating GANs. Following this strategy, we propose an improved method for increasing the fidelity of audio signals synthesized by GANs. Our method is demonstrated on a public speech dataset and evaluated with the Fréchet Inception Distance (FID). Our method achieved an FID of 10.504, compared with 11.973 for the existing state-of-the-art method (lower FID indicates better fidelity).
ER -