Measuring the performance of a given classifier is not a straightforward task. Depending on the application, the overall classification rate may not be sufficient if one or more of the classes are poorly predicted. This problem also affects the feature selection process, especially when a wrapper method is used. Cohen's kappa coefficient is a statistical measure of inter-rater agreement for qualitative items. It is generally considered a more robust measure than a simple percent-agreement calculation, since it takes into account the agreement occurring by chance. Because kappa is a more conservative measure, it is well suited as an evaluation criterion for testing model performance in wrapper feature selection. This paper proposes the use of the kappa measure as an evaluation measure in a feature selection wrapper approach. In the proposed approach, fuzzy models are used to test the feature subsets and fuzzy criteria are used to formulate the feature selection problem. Results show that using the kappa measure leads to more accurate classifiers, and therefore to feature subset solutions with more relevant features.
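To illustrate how kappa corrects percent agreement for chance, the following is a minimal sketch (not the paper's implementation) that computes Cohen's kappa, κ = (p_o − p_e) / (1 − p_e), from a confusion matrix, where p_o is the observed agreement and p_e the agreement expected by chance from the marginal distributions:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix.

    Rows are assumed to be actual classes, columns predicted classes.
    """
    n = len(confusion)
    total = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal.
    p_o = sum(confusion[i][i] for i in range(n)) / total
    # Chance agreement: product of row and column marginals per class.
    p_e = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(n)
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2-class example: 85% accuracy, but kappa = 0.7
# because half the agreement could occur by chance.
cm = [[45, 5],
      [10, 40]]
print(cohens_kappa(cm))  # → 0.7
```

A wrapper method would evaluate each candidate feature subset by training a classifier on it and scoring the resulting confusion matrix with this function instead of raw accuracy.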

Additional Metadata
Conference 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010
Vieira, S. M., Kaymak, U., & da Costa Sousa, J. M. (2010). Cohen's kappa coefficient as a performance measure for feature selection. Presented at the 2010 6th IEEE World Congress on Computational Intelligence, WCCI 2010. doi:10.1109/FUZZY.2010.5584447