Explaining Deep Learning Models Through Rule-Based Approximation and Visualization

Almeida Soares, Eduardo and Angelov, Plamen and Costa, Bruno and Castro, Marcos and Nageshrao, Subramanya and Filev, Dimitar (2021) Explaining Deep Learning Models Through Rule-Based Approximation and Visualization. IEEE Transactions on Fuzzy Systems, 29 (8). pp. 2399-2407. ISSN 1063-6706

Text (Explaining Deep Learning Models Through Rule-Based Approximation and Visualization)
Explaining_Deep_Learning_Final.pdf - Accepted Version
Available under License Creative Commons Attribution-NonCommercial.

Download (853kB)

Abstract

This paper describes a novel approach to the problem of developing explainable machine learning models. We consider a Deep Reinforcement Learning (DRL) model representing a highway path-planning policy for autonomous highway driving. The model constitutes a mapping from the continuous multidimensional state space, characterizing vehicle positions and velocities, to a discrete set of actions in the longitudinal and lateral directions. It is obtained by applying a customized version of the Double Deep Q-Network (DDQN) learning algorithm. The main idea is to approximate the DRL model with a set of IF…THEN rules that provide an alternative interpretable model, which is further enhanced by visualizing the rules. This concept is rationalized by the universal approximation properties of rule-based models with fuzzy predicates. The proposed approach includes a learning engine composed of 0-order fuzzy rules, which generalize locally around the prototypes by using multivariate function models. Prototypes that are adjacent in the data space and correspond to the same action are further grouped and merged into so-called "MegaClouds," significantly reducing the number of fuzzy rules. The input selection method is based on ranking the density of the individual inputs. Experimental results show that the specific DRL agent can be interpreted by approximating it with families of rules of different granularity. The method is computationally efficient and can potentially be extended to address the explainability of the broader class of fully connected deep neural network models.
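The two core mechanisms in the abstract can be illustrated with a minimal sketch: 0-order fuzzy rule inference (each rule fires in proportion to similarity to its prototype, and the winning rule's consequent is simply an action label) and the merging of adjacent same-action prototypes into "MegaClouds". The function names `predict_action` and `merge_megaclouds`, the Cauchy-type similarity kernel, and the greedy radius-based grouping are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def predict_action(x, prototypes, actions):
    """0-order fuzzy rule inference: fire the rule whose prototype is
    most similar to x (winner-takes-all) and return its action label."""
    d = np.linalg.norm(prototypes - x, axis=1)
    similarity = 1.0 / (1.0 + d**2)  # Cauchy-type kernel (assumed choice)
    return actions[int(np.argmax(similarity))]

def merge_megaclouds(prototypes, actions, radius):
    """Greedily group prototypes that lie within `radius` of each other
    and share the same action; each group (a "MegaCloud") becomes a single
    rule whose prototype is the group mean, reducing the rule count."""
    n = len(prototypes)
    group_of = [-1] * n
    groups = []
    for i in range(n):
        if group_of[i] >= 0:
            continue
        members = [i]
        group_of[i] = len(groups)
        for j in range(i + 1, n):
            if (group_of[j] < 0 and actions[j] == actions[i]
                    and np.linalg.norm(prototypes[j] - prototypes[i]) <= radius):
                members.append(j)
                group_of[j] = len(groups)
        groups.append(members)
    merged_protos = np.array([prototypes[g].mean(axis=0) for g in groups])
    merged_actions = [actions[g[0]] for g in groups]
    return merged_protos, merged_actions
```

For example, two nearby prototypes both labeled "keep lane" collapse into one rule centered at their mean, while a distant "change left" prototype remains its own rule; inference over the merged rule base then needs fewer similarity evaluations per state.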

Item Type:
Journal Article
Journal or Publication Title:
IEEE Transactions on Fuzzy Systems
Additional Information:
©2020 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.
Uncontrolled Keywords:
/dk/atira/pure/subjectarea/asjc/1700/1702
Subjects:
deep reinforcement learning; explainable AI; rule-based models; prototype- and density-based models; density-based input selection; autonomous driving; artificial intelligence; computational theory and mathematics; applied mathematics; control and systems engineering
ID Code:
144339
Deposited By:
Deposited On:
05 Jun 2020 10:35
Refereed?:
Yes
Published?:
Published
Last Modified:
27 Feb 2024 01:39