Approximation capabilities of neural networks on unbounded domains

21 Oct 2019 · Ming-Xi Wang, Yang Qu

In this paper, we prove that a shallow neural network with a monotone sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function can arbitrarily well approximate any L^p (p ≥ 2) integrable function defined on R × [0,1]^n. We also prove that a shallow neural network with a sigmoid, ReLU, ELU, Softplus, or LeakyReLU activation function expresses no nonzero integrable function defined on the Euclidean plane. Together with a recent result that deep ReLU networks can arbitrarily well approximate any integrable function on Euclidean spaces, this provides a new perspective on the advantage of multiple hidden layers in the context of ReLU networks. Lastly, we prove that the ReLU network with depth 3 is a universal approximator in L^p(R^n).
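To make the two architectures being compared concrete, below is a minimal NumPy sketch (not taken from the paper; the layer widths, random parameters, and the convention of counting depth as the number of affine layers are illustrative assumptions): a one-hidden-layer ReLU network of the "shallow" kind discussed in the first two results, alongside a two-hidden-layer ReLU network, i.e. depth 3 under that counting convention, corresponding to the class in the last result.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def shallow_relu_net(x, W1, b1, w2, b2):
    # One hidden ReLU layer followed by an affine output layer.
    return relu(x @ W1 + b1) @ w2 + b2

def depth3_relu_net(x, W1, b1, W2, b2, w3, b3):
    # Two hidden ReLU layers followed by an affine output layer
    # (depth 3 if depth counts affine layers; conventions vary).
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    return h @ w3 + b3

# Illustrative usage with random parameters and inputs in R^2.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 2))                      # 4 sample points in R^2

W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)
w2, b2 = rng.normal(size=(8, 1)), rng.normal(size=1)

W2h, b2h = rng.normal(size=(8, 8)), rng.normal(size=8)
w3, b3 = rng.normal(size=(8, 1)), rng.normal(size=1)

print(shallow_relu_net(x, W1, b1, w2, b2).shape)          # (4, 1)
print(depth3_relu_net(x, W1, b1, W2h, b2h, w3, b3).shape) # (4, 1)
```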
