Reduction of Approximate Rule based on Probabilistic Rough sets


The KIPS Transactions:PartD, Vol. 8, No. 3, pp. 203-210, Jun. 2001
DOI: 10.3745/KIPSTD.2001.8.3.203

Abstract

These days, data are being collected and accumulated in a wide variety of fields. The stored data themselves form an information system that helps us make decisions. An information system includes both necessary and unnecessary attributes, so many algorithms have been developed to find useful patterns in the data and to reason approximately about new objects. We are interested in simple, understandable rules that can represent useful patterns. In this paper we propose an algorithm, based on probabilistic rough set theory, that reduces the information in the system to a minimum. The proposed algorithm uses a value that tolerates a loss in classification accuracy. This tolerance value helps minimize the set of attributes needed to classify a new object by reducing the conditional attributes, which in turn shortens the time required to generate rules. We evaluate the proposed algorithm on the IRIS data and the Wisconsin Breast Cancer data. The experimental results show that the algorithm retrieves a small reduct and minimizes the size of the rule set under the tolerated classification rate.
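As a rough sketch of the general idea behind probabilistic (variable-precision) rough-set attribute reduction, not the authors' actual implementation: equivalence classes induced by the conditional attributes are checked against the decision attribute, and an attribute is dropped whenever the classification quality stays within the tolerated rate. The table layout, function names, the `beta` tolerance parameter, and the greedy elimination order below are all illustrative assumptions.

```python
def partition(table, attrs):
    """Group row indices into equivalence classes by the values of attrs."""
    classes = {}
    for i, row in enumerate(table):
        key = tuple(row[a] for a in attrs)
        classes.setdefault(key, []).append(i)
    return list(classes.values())

def quality(table, cond_attrs, dec_attr, beta=1.0):
    """Fraction of objects whose equivalence class predicts the decision
    with precision >= beta (beta=1.0 is the classical positive region)."""
    consistent = 0
    for block in partition(table, cond_attrs):
        decisions = [table[i][dec_attr] for i in block]
        majority = max(decisions.count(d) for d in set(decisions))
        if majority / len(block) >= beta:
            consistent += len(block)
    return consistent / len(table)

def reduce_attrs(table, cond_attrs, dec_attr, beta=1.0):
    """Greedily drop conditional attributes while the classification
    quality (tolerated by beta) does not fall below the full-attribute level."""
    target = quality(table, cond_attrs, dec_attr, beta)
    reduct = list(cond_attrs)
    for a in cond_attrs:
        trial = [x for x in reduct if x != a]
        if trial and quality(table, trial, dec_attr, beta) >= target:
            reduct = trial
    return reduct

# Tiny hypothetical decision table: attribute "c" alone determines "d",
# so the greedy pass shrinks the reduct to ["c"].
rows = [
    {"a": 0, "b": 0, "c": 0, "d": "no"},
    {"a": 0, "b": 1, "c": 0, "d": "no"},
    {"a": 1, "b": 0, "c": 1, "d": "yes"},
    {"a": 1, "b": 1, "c": 1, "d": "yes"},
]
print(reduce_attrs(rows, ["a", "b", "c"], "d"))  # → ['c']
```

Lowering `beta` below 1.0 tolerates some misclassified objects in each equivalence class, which typically lets more attributes be removed at the cost of rule accuracy, mirroring the tolerance value described in the abstract.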



Cite this article
[IEEE Style]
E. A. Kwon and H. G. Kim, "Reduction of Approximate Rule based on Probabilistic Rough sets," The KIPS Transactions:PartD, vol. 8, no. 3, pp. 203-210, 2001. DOI: 10.3745/KIPSTD.2001.8.3.203.

[ACM Style]
Eun Ah Kwon and Hong Gi Kim. 2001. Reduction of Approximate Rule based on Probabilistic Rough sets. The KIPS Transactions:PartD, 8, 3, (2001), 203-210. DOI: 10.3745/KIPSTD.2001.8.3.203.