Automatic Tag Classification from Sound Data for Graph-Based Music Recommendation


KIPS Transactions on Software and Data Engineering, Vol. 10, No. 10, pp. 399-406, Oct. 2021
https://doi.org/10.3745/KTSDE.2021.10.10.399
Keywords: Music Recommendation, Automatic Tag Classification, Sound Data
Abstract

With the steady growth of the content industry, the need for research on automatically recommending content suited to individual tastes is increasing. To improve the accuracy of automatic content recommendation, it is necessary to combine existing recommendation techniques based on users' preference histories with techniques that use content metadata or features extracted from the content itself. In this work, we propose a new graph-based music recommendation method that trains an LSTM-based classification model to automatically extract appropriate tagging words from sound data and applies the extracted tagging words, together with users' preferred music lists and music metadata, to graph-based music recommendation. Experimental results show that the proposed method outperforms existing recommendation methods in terms of recommendation accuracy.
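To make the tag-classification step concrete, the sketch below shows one common way to realize an LSTM-based multi-label audio tagger in PyTorch. This is not the authors' implementation; the input representation (mel-spectrogram frames), layer sizes, tag count, and decision threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's code) of an LSTM-based multi-label tag
# classifier over mel-spectrogram frames. All dimensions and the 0.5
# threshold are assumptions for illustration.
import torch
import torch.nn as nn

class LSTMTagClassifier(nn.Module):
    def __init__(self, n_mels: int = 96, hidden_size: int = 128, n_tags: int = 50):
        super().__init__()
        # Each spectrogram time step is one LSTM input frame.
        self.lstm = nn.LSTM(input_size=n_mels, hidden_size=hidden_size,
                            num_layers=2, batch_first=True, bidirectional=True)
        # Multi-label head: one independent sigmoid output per candidate tag word.
        self.head = nn.Linear(2 * hidden_size, n_tags)

    def forward(self, mel: torch.Tensor) -> torch.Tensor:
        # mel: (batch, time, n_mels)
        out, _ = self.lstm(mel)
        # Summarize the clip with the final time step's hidden representation.
        logits = self.head(out[:, -1, :])
        return torch.sigmoid(logits)

# Usage: predict tag probabilities for a batch of clips and keep tags above
# a threshold as tagging words for the clips.
model = LSTMTagClassifier()
clips = torch.randn(4, 430, 96)   # 4 clips, 430 frames, 96 mel bands (assumed)
probs = model(clips)              # (4, 50) per-tag probabilities
predicted_tags = probs > 0.5      # boolean tag assignments per clip
```

In the proposed pipeline, tags predicted this way would then be added alongside the users' preferred music lists and the music metadata when building the recommendation graph; the specific graph construction and scoring scheme are described in the full paper.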




Cite this article
[IEEE Style]
T. Kim, H. Kim, S. Lee, "Automatic Tag Classification from Sound Data for Graph-Based Music Recommendation," KIPS Transactions on Software and Data Engineering, vol. 10, no. 10, pp. 399-406, 2021. DOI: https://doi.org/10.3745/KTSDE.2021.10.10.399.

[ACM Style]
Taejin Kim, Heechan Kim, and Soowon Lee. 2021. Automatic Tag Classification from Sound Data for Graph-Based Music Recommendation. KIPS Transactions on Software and Data Engineering, 10, 10, (2021), 399-406. DOI: https://doi.org/10.3745/KTSDE.2021.10.10.399.