Weighted Sums of Random Kitchen Sinks: Replacing minimization with randomization in learning

NeurIPS 2008  ·  Ali Rahimi, Benjamin Recht

Randomized neural networks are immortalized in this AI Koan:

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?" asked Minsky. "I am training a randomly wired neural net to play tic-tac-toe," Sussman replied. "Why is the net wired randomly?" asked Minsky. Sussman replied, "I do not want it to have any preconceptions of how to play." Minsky then shut his eyes. "Why do you close your eyes?" Sussman asked his teacher. "So that the room will be empty," replied Minsky. At that moment, Sussman was enlightened.

We analyze shallow random networks with the help of concentration of measure inequalities. Specifically, we consider architectures that compute a weighted sum of their inputs after passing them through a bank of arbitrary randomized nonlinearities. We identify conditions under which these networks exhibit good classification performance, and bound their test error in terms of the size of the dataset and the number of random nonlinearities.
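As a concrete illustration of the architecture, the sketch below draws the nonlinearity parameters at random once, never trains them, and then minimizes only over the output weights. It instantiates the "arbitrary randomized nonlinearities" as cosines with Gaussian random directions (random Fourier features) and fits the weights by ridge regression rather than the paper's constrained least squares; the function names and hyperparameters here are illustrative, not from the paper.

```python
import numpy as np

def fit_random_kitchen_sinks(X, y, n_features=500, gamma=1.0, reg=1e-3, seed=0):
    """Hypothetical sketch of a random kitchen sink: fixed random
    cosine nonlinearities, trained only in the output weights."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Randomize: sample the nonlinearity parameters once; they are never trained.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    Z = np.cos(X @ W + b)  # n-by-K matrix of random nonlinear features
    # Minimize only over the output weights: a convex ridge-regression problem.
    alpha = np.linalg.solve(Z.T @ Z + reg * np.eye(n_features), Z.T @ y)
    return W, b, alpha

def predict(X, W, b, alpha):
    # Weighted sum of the bank of random nonlinearities.
    return np.cos(X @ W + b) @ alpha

# Toy usage: fit a noisy 1-D sine wave.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(2 * X[:, 0]) + 0.1 * np.random.default_rng(1).normal(size=200)
W, b, alpha = fit_random_kitchen_sinks(X, y, n_features=300)
print(np.mean((predict(X, W, b, alpha) - y) ** 2))  # small training error
```

Because the nonlinearities are fixed at random, the only optimization left is a convex linear problem in the output weights, which is the sense in which minimization is replaced by randomization.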
