Gradient Descent Approach for Value-Based Weighting

KIPS Transactions on Software and Data Engineering, Vol. 17, No. 5, pp. 381-388, May. 2010
DOI: 10.3745/KIPSTB.2010.17.5.381


Naive Bayesian learning has been widely used in many data mining applications, and it performs surprisingly well in practice. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. Whereas current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning, and we develop new methods, based on gradient descent, for both value weighting and feature weighting in the naive Bayesian setting. The performance of the proposed methods was compared with that of the attribute weighting method and standard naive Bayesian learning, and the value weighting method performed better in most cases.
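To illustrate the idea described in the abstract, the following is a minimal sketch of value-weighted naive Bayes trained by gradient descent. It is not the paper's implementation: all function names, the cross-entropy objective, the learning rate, and the Laplace smoothing are illustrative assumptions. Each attribute value v of attribute j gets its own weight w[j][v], which scales that value's log-likelihood contribution to the class score.

```python
# Illustrative sketch only; details (loss, smoothing, update rule) are assumptions,
# not taken from the paper.
import numpy as np

def fit_counts(X, y, n_values, n_classes, alpha=1.0):
    """Laplace-smoothed log P(c) and log P(x_j = v | c) for categorical data."""
    n, d = X.shape
    log_prior = (np.log(np.bincount(y, minlength=n_classes) + alpha)
                 - np.log(n + alpha * n_classes))
    log_lik = []  # log_lik[j][v, c] = log P(x_j = v | c)
    for j in range(d):
        counts = np.zeros((n_values[j], n_classes))
        for v, c in zip(X[:, j], y):
            counts[v, c] += 1
        log_lik.append(np.log(counts + alpha)
                       - np.log(counts.sum(axis=0) + alpha * n_values[j]))
    return log_prior, log_lik

def posteriors(X, log_prior, log_lik, weights):
    """Value-weighted posterior: softmax over log P(c) + sum_j w[j][x_j] log P(x_j|c)."""
    scores = np.tile(log_prior, (X.shape[0], 1))
    for j, ll in enumerate(log_lik):
        scores += weights[j][X[:, j], None] * ll[X[:, j], :]
    scores -= scores.max(axis=1, keepdims=True)  # for numerical stability
    p = np.exp(scores)
    return p / p.sum(axis=1, keepdims=True)

def train_value_weights(X, y, n_values, n_classes, lr=0.1, epochs=200):
    """Gradient descent on cross-entropy, one weight per attribute value."""
    log_prior, log_lik = fit_counts(X, y, n_values, n_classes)
    weights = [np.ones(nv) for nv in n_values]  # w = 1 recovers plain naive Bayes
    Y = np.eye(n_classes)[y]
    for _ in range(epochs):
        P = posteriors(X, log_prior, log_lik, weights)
        err = P - Y  # d(cross-entropy)/d(class score) under softmax
        for j, ll in enumerate(log_lik):
            # dL/dw[j][x_j] = sum_c (P_c - Y_c) * log P(x_j | c)
            g = (err * ll[X[:, j], :]).sum(axis=1)
            np.add.at(weights[j], X[:, j], -lr * g / len(y))
    return log_prior, log_lik, weights
```

Attribute (feature) weighting falls out as a special case: tie all entries of `weights[j]` to a single scalar per attribute instead of one per value.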

Cite this article
[IEEE Style]
C. H. Lee and J. H. Bae, "Gradient Descent Approach for Value-Based Weighting," KIPS Journal B (2001 ~ 2012), vol. 17, no. 5, pp. 381-388, 2010. DOI: 10.3745/KIPSTB.2010.17.5.381.

[ACM Style]
Chang Hwan Lee and Joo Hyun Bae. 2010. Gradient Descent Approach for Value-Based Weighting. KIPS Journal B (2001 ~ 2012), 17, 5, (2010), 381-388. DOI: 10.3745/KIPSTB.2010.17.5.381.