Design and Implementation of a Sound Classification System for Context-Aware Mobile Computing


KIPS Transactions on Software and Data Engineering, Vol. 3, No. 2, pp. 81-86, Feb. 2014
DOI: 10.3745/KTSDE.2014.3.2.81

Abstract

In this paper, we present an effective sound classification system for recognizing the real-time context of a smartphone user. Our system avoids unnecessary consumption of limited computational resources by filtering both silence and white noise out of the input sound data in the pre-processing step. It also improves classification performance on low-energy sounds by amplifying them during pre-processing. Moreover, for efficient learning and application of HMM classification models, our system performs dimension reduction and discretization on the feature vectors through k-means clustering. We collected a large amount of sound data of eight different types from daily life in a university research building and conducted experiments on this data set. In these experiments, our system showed high classification performance.
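The pre-processing pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the frame size, energy threshold, target RMS level, and number of clusters are all assumed values, and a toy k-means is used in place of whatever clustering configuration the paper actually employs. The discrete cluster indices it produces are the kind of observation symbols a discrete HMM would consume.

```python
import numpy as np

def filter_silence(frames, energy_threshold=1e-3):
    """Drop frames whose mean energy falls below a threshold
    (stands in for the paper's silence/white-noise filtering).
    The threshold value here is illustrative only."""
    energy = np.mean(frames ** 2, axis=1)
    return frames[energy >= energy_threshold]

def amplify_low_energy(frames, target_rms=0.1):
    """Boost quiet frames up to a target RMS level, mimicking the
    amplification of low-energy sounds during pre-processing."""
    rms = np.sqrt(np.mean(frames ** 2, axis=1, keepdims=True))
    gain = np.where(rms < target_rms, target_rms / np.maximum(rms, 1e-12), 1.0)
    return frames * gain

def kmeans_discretize(features, k=8, iters=20, seed=0):
    """Toy k-means: map each feature vector to the index of its
    nearest centroid, yielding a discrete observation sequence
    suitable as input symbols for an HMM classifier."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each vector to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned vectors.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels
```

In a full system, the discretized symbol sequence from `kmeans_discretize` would be fed to one trained HMM per sound class, with the class of the highest-likelihood model taken as the prediction.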




Cite this article
[IEEE Style]
J. H. Kim, S. J. Lee, I. C. Kim, "Design and Implementation of a Sound Classification System for Context-Aware Mobile Computing," KIPS Transactions on Software and Data Engineering, vol. 3, no. 2, pp. 81-86, 2014. DOI: 10.3745/KTSDE.2014.3.2.81.

[ACM Style]
Joo Hee Kim, Seok Jun Lee, and In Cheol Kim. 2014. Design and Implementation of a Sound Classification System for Context-Aware Mobile Computing. KIPS Transactions on Software and Data Engineering, 3, 2, (2014), 81-86. DOI: 10.3745/KTSDE.2014.3.2.81.