Gu, Xiaowei and Angelov, Plamen (2020) Highly interpretable hierarchical deep rule-based classifier. Applied Soft Computing, 92. ISSN 1568-4946
Abstract
Building upon traditional fuzzy rule-based (FRB) systems, deep rule-based (DRB) classifiers are able to offer both human-level performance and a transparent system structure on image classification problems by integrating a zero-order fuzzy rule base with a multi-layer image-processing architecture typical of deep learning. Nonetheless, it is frequently observed that the inner structure of DRB can become overly sophisticated and no longer interpretable for humans when applied to large-scale, complex problems. To tackle this issue, one feasible solution is to construct a tree-structured classification model by aggregating the potentially huge number of prototypes identified from data into a much smaller number of more descriptive, highly abstract ones. Therefore, in this paper, we present a novel hierarchical deep rule-based (H-DRB) approach that is capable of summarizing the less descriptive raw prototypes into highly generalized ones and self-arranging them into a hierarchical prototype-based structure according to their descriptive abilities. By doing so, H-DRB can offer high-level performance and, most importantly, full transparency and human-interpretability on a wide range of problems, including large-scale ones. The proposed concept and general principles are verified through numerical experiments on a wide variety of popular benchmark image sets. Numerical results demonstrate the promise of H-DRB.
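To make the central idea of the abstract concrete, the sketch below illustrates, in very rough terms, what it means to summarize many raw prototypes into a smaller set of more abstract ones and to classify by descending a prototype hierarchy. This is not the H-DRB algorithm from the paper: k-means is used here only as a hypothetical stand-in for the prototype-aggregation mechanism, prototypes are treated as plain feature vectors, and all function names and parameters are illustrative assumptions.

```python
# Minimal sketch of a two-level prototype hierarchy, assuming prototypes are
# feature vectors and using k-means as a stand-in aggregation step.
# NOT the authors' H-DRB method -- for illustration of the general idea only.
import numpy as np
from sklearn.cluster import KMeans


def build_hierarchy(raw_prototypes, n_abstract):
    """Aggregate raw prototypes into n_abstract higher-level prototypes."""
    km = KMeans(n_clusters=n_abstract, n_init=10, random_state=0)
    labels = km.fit_predict(raw_prototypes)
    # Each abstract prototype keeps the raw prototypes it summarizes.
    children = {i: raw_prototypes[labels == i] for i in range(n_abstract)}
    return km.cluster_centers_, children


def classify(x, abstract_protos, children):
    """Descend the hierarchy: nearest abstract prototype, then nearest child."""
    top = int(np.argmin(np.linalg.norm(abstract_protos - x, axis=1)))
    child_protos = children[top]
    leaf = int(np.argmin(np.linalg.norm(child_protos - x, axis=1)))
    return top, leaf  # indices of the winning branch and raw prototype


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    raw = rng.normal(size=(200, 16))          # 200 hypothetical raw prototypes
    abstract, kids = build_hierarchy(raw, 8)  # summarize into 8 abstract ones
    print(classify(rng.normal(size=16), abstract, kids))
```

In the paper itself, the aggregation and the arrangement of prototypes by descriptive ability are performed by the H-DRB mechanism rather than generic clustering; the sketch only conveys the hierarchical nearest-prototype structure described in the abstract.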