@article{M6B4FD55E,
  title    = "Enhanced Sound Signal Based Sound-Event Classification",
  journal  = "KIPS Transactions on Software and Data Engineering",
  year     = "2019",
  issn     = "2287-5905",
  doi      = "10.3745/KTSDE.2019.8.5.193",
  author   = "Yongju Cho and Jonguk Lee and Daihee Park and Yongwha Chung",
  keywords = "Noise Robustness, Sound Signal Generation, End-to-End Architecture, Deep Learning",
  abstract = "The explosion of data driven by improvements in sensor technology and computing performance has become the basis for analyzing situations in industrial fields, and attempts to detect events from such data have been increasing. In particular, sound signals collected from sensors serve as important information for classifying events in various application fields, since field information can be collected efficiently at relatively low cost. However, the performance of sound-event classification in the field cannot be guaranteed unless noise is removed; to implement a practically applicable system, robust performance must be ensured even under various noise conditions. In this study, we propose a system that classifies sound events after generating an enhanced sound signal based on a deep learning algorithm. In particular, to remove noise from the sound signal itself, enhanced sound data robust to noise is generated using SEGAN, which applies a VAE technique to a GAN. An end-to-end sound-event classification system is then designed that uses the enhanced sound signal directly as input to a CNN structure, without a data conversion process. The performance of the proposed method was verified experimentally using sound data obtained from industrial fields, achieving F1 scores of 99.29% (railway industry) and 97.80% (livestock industry)."
}