A study on the waveform-based end-to-end deep convolutional neural network for weakly supervised sound event detection

  • Seokjin Lee (School of Electronics Engineering, Kyungpook National University) ;
  • Minhan Kim (School of Electronics Engineering, Kyungpook National University) ;
  • Young Ho Jeong (Media Coding Research Section, Electronics and Telecommunications Research Institute)
  • Received : 2019.11.13
  • Accepted : 2019.12.13
  • Published : 2020.01.31

Abstract

In this paper, a deep convolutional neural network for sound event detection is studied. In particular, an end-to-end neural network that generates detection results directly from the input audio waveform is studied for the weakly supervised problem, which involves both weakly labeled and unlabeled datasets. The proposed system is based on a network structure of deeply stacked 1-dimensional convolutional neural networks, enhanced by skip connections and a gating mechanism. The system is further improved by sound activity detection and post-processing, and a training step using the mean-teacher model is added to handle the weakly supervised data. The proposed system was evaluated on the Detection and Classification of Acoustic Scenes and Events (DCASE) 2019 Task 4 dataset, achieving F1-scores of approximately 54 % (segment-based) and 32 % (event-based).
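The abstract describes a deep stack of 1-dimensional convolutional layers applied to the raw waveform, strengthened by skip connections and a gating mechanism. For illustration only, the following is a minimal PyTorch sketch of one such gated 1-D convolutional block; the channel count, kernel size, normalization choice, and exact gating form are assumptions made for the example, not the authors' configuration.

```python
import torch
import torch.nn as nn

class GatedConv1dBlock(nn.Module):
    """A single 1-D convolutional block with a sigmoid gate and an
    additive skip connection. All sizes are hypothetical placeholders."""

    def __init__(self, channels: int = 64, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2  # preserve the time resolution
        self.conv = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.gate = nn.Conv1d(channels, channels, kernel_size, padding=padding)
        self.norm = nn.BatchNorm1d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gating mechanism: tanh features modulated by a sigmoid gate.
        h = torch.tanh(self.conv(x)) * torch.sigmoid(self.gate(x))
        # Skip connection: add the block input back to the gated output.
        return self.norm(h) + x

# Blocks like this can be stacked deeply over a waveform embedding of
# shape (batch, channels, samples), keeping the sequence length fixed.
x = torch.randn(4, 64, 16000)
y = GatedConv1dBlock()(x)  # output shape: (4, 64, 16000)
```

The identity path of the skip connection keeps gradients flowing through a deep stack of such blocks, which is what makes the deeply stacked 1-D structure trainable on raw waveforms.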
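The mean-teacher training step can likewise be sketched briefly: a teacher copy of the network tracks an exponential moving average of the student's weights, and a consistency loss ties the two models' predictions on unlabeled clips. Again this is a minimal sketch under stated assumptions; the decay rate and the use of a mean-squared-error consistency term are placeholders, not the paper's exact settings.

```python
import torch

def update_teacher(student: torch.nn.Module,
                   teacher: torch.nn.Module,
                   alpha: float = 0.999) -> None:
    """Mean-teacher update: teacher weights become an exponential
    moving average (EMA) of the student weights. alpha is a
    placeholder decay rate."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1.0 - alpha)

def consistency_loss(student_out: torch.Tensor,
                     teacher_out: torch.Tensor) -> torch.Tensor:
    # On unlabeled (or weakly labeled) clips, penalize disagreement
    # between student and teacher predictions; the teacher output is
    # detached because only the student is trained by backpropagation.
    return torch.nn.functional.mse_loss(student_out, teacher_out.detach())
```

After each optimizer step on the student, update_teacher is called so that the teacher remains a smoothed version of the student, which stabilizes the targets used on the unlabeled data.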

