The Computational Complexity of Counting Linear Regions in ReLU Neural Networks
Abstract
The paper surveys the different, non-equivalent definitions of linear regions in ReLU neural networks, analyzes the computational complexity of counting regions under each definition, and gives polynomial-space counting algorithms for some common definitions.
An established measure of the expressive power of a given ReLU neural network is the number of linear regions into which it partitions the input space. There exist many different, non-equivalent definitions of what a linear region actually is. We systematically assess which papers use which definitions and discuss how they relate to one another. We then analyze the computational complexity of counting the number of such regions for the various definitions. Generally, this turns out to be an intractable problem. We prove NP- and #P-hardness results even for networks with a single hidden layer, and strong hardness-of-approximation results for networks with two or more hidden layers. Finally, on the algorithmic side, we show that, for some common definitions, counting linear regions can at least be done in polynomial space.
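For intuition about why the definitions diverge, here is a minimal sketch for a one-hidden-layer network on a one-dimensional input. The function name, the tolerance, and the exact float matching of shared breakpoints are my illustrative choices, not the paper's algorithm: it distinguishes "activation regions" (intervals with a constant on/off pattern) from one common reading of "linear regions" (maximal intervals on which the function is affine).

```python
from collections import defaultdict

def count_regions_1d(w, b, v):
    """Count regions of f(x) = sum_i v[i] * relu(w[i] * x + b[i]) on the real line.

    Returns (activation_regions, linear_regions):
      * activation regions: intervals on which the on/off pattern of the
        hidden neurons is constant (number of distinct breakpoints + 1);
      * linear regions (one common variant): maximal intervals on which f
        is affine, so a breakpoint only counts if the slope actually jumps.
    """
    jump = defaultdict(float)  # total slope jump of f at each breakpoint
    for wi, bi, vi in zip(w, b, v):
        if wi != 0.0:  # a neuron with w_i == 0 never switches and creates no breakpoint
            # Crossing x = -b_i/w_i from left to right changes the slope by v_i * |w_i|.
            jump[-bi / wi] += vi * abs(wi)
    activation_regions = len(jump) + 1
    linear_regions = 1 + sum(1 for j in jump.values() if abs(j) > 1e-12)
    return activation_regions, linear_regions

# f(x) = relu(x) - relu(-x) = x is globally affine, yet crosses two activation patterns:
print(count_regions_1d(w=[1.0, -1.0], b=[0.0, 0.0], v=[1.0, -1.0]))  # -> (2, 1)
```

The example at the bottom shows the two counts disagreeing on the same network: the slope jumps of the two neurons cancel at their shared breakpoint, so the function is one affine piece even though there are two activation patterns.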
Community
Also (if you can clamber over a hurdle), there is a switching viewpoint on ReLU:
https://sciencelimelight.blogspot.com/2025/08/mental-framing-problems-viewing-relu-as.html
Or more simply:
https://sciencelimelight.blogspot.com/2025/08/explaining-relu-as-switch-another-way.html
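Roughly, the switch reading is that ReLU multiplies each pre-activation by a 0/1 gate determined by its sign; a minimal sketch of that idea follows (the snippet is my illustration, not taken from the linked posts):

```python
import numpy as np

def relu(x):
    """ReLU read as a switch: each coordinate is multiplied by a 0/1 gate
    determined by the sign of its pre-activation."""
    gate = (x > 0).astype(x.dtype)  # the 'switch': 1 where the unit is on, 0 where off
    return gate * x                  # identical to np.maximum(x, 0)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))  # [0.  0.  0.  1.5]
```

On any input where all gates are fixed, the network collapses to a single affine map, which is exactly what makes the region definitions discussed in the paper meaningful.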