Jiang, Ziping and Wang, Yunpeng and Li, Chang-Tsun and Angelov, Plamen and Jiang, Richard (2023) Delve into Neural Activations: Towards Understanding Dying Neurons. IEEE Transactions on Artificial Intelligence, 4 (4), pp. 959-971. ISSN 2691-4581
Abstract
Theoretically, a deep neural network with nonlinear activations can approximate any function, yet empirically the performance of models with different activations varies widely. In this work, we investigate the expressivity of the network from an activation perspective. In particular, we introduce a generalized activation region/pattern to describe the functional behavior of a model with an arbitrary activation function and illustrate its fundamental properties. We then propose a metric, named pattern similarity, that evaluates the practical expressivity of a neural network on a given dataset based on neuron-level responses to the input. Using it, we uncover an undocumented dying neuron issue: the post-activation values of most neurons remain in the same region for data with different labels, implying that the expressivity of networks with certain activations is greatly constrained. For instance, around 80% of the post-activation values of a well-trained Sigmoid or Tanh network are clustered in the same region for any test sample. This means that most neurons provide no useful information for distinguishing data with different labels, suggesting that the practical expressivity of such networks is far below the theoretical one. By comparing our metric with the test accuracy of the model, we show that the severity of the dying neuron issue is strongly related to model performance. Finally, we discuss the cause of the dying neuron issue, offering an explanation for the performance gap induced by the choice of activation.
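As a rough illustration of the dying neuron measurement described in the abstract, the sketch below (PyTorch) estimates the fraction of hidden neurons whose post-activation value falls in the same region for every input in a batch. The architecture, the binning of post-activation values into regions, and all function names here are our own assumptions for illustration; the paper's exact definitions of activation regions and of the pattern similarity metric are not given in the abstract.

import torch
import torch.nn as nn

# Illustrative Tanh MLP; not the architecture used in the paper.
model = nn.Sequential(
    nn.Linear(784, 256), nn.Tanh(),
    nn.Linear(256, 256), nn.Tanh(),
    nn.Linear(256, 10),
)

def activation_regions(model, x, n_bins=2):
    # Assign every hidden neuron a discrete region index per input.
    # For ReLU the natural regions are {inactive, active}; for smooth
    # activations such as Tanh we approximate regions by binning the
    # post-activation value over its observed range (an assumption,
    # standing in for the paper's generalized activation pattern).
    regions = []
    h = x
    for layer in model:
        h = layer(h)
        if isinstance(layer, (nn.Tanh, nn.Sigmoid, nn.ReLU)):
            edges = torch.linspace(h.min().item(), h.max().item(), n_bins + 1)[1:-1]
            regions.append(torch.bucketize(h, edges))
    return torch.cat(regions, dim=1)  # shape: (batch, total hidden neurons)

def dying_neuron_fraction(model, x):
    # A neuron whose region never changes across the batch carries no
    # information for separating labels -- the abstract's "dying" neuron.
    r = activation_regions(model, x)
    constant = (r == r[0:1]).all(dim=0)
    return constant.float().mean().item()

x = torch.randn(512, 784)  # stand-in for real test data
print(f"dying-neuron fraction: {dying_neuron_fraction(model, x):.2%}")

On real test data the batch would be drawn from samples with different labels, matching the abstract's setting; the roughly 80% figure it reports for well-trained Sigmoid and Tanh networks refers to the paper's own region definition, of which this binning is only a simplified proxy.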