Multi-Activation Hidden Units for Neural Networks with Random Weights
Abstract
Single-layer feedforward networks with random weights are successful in a variety of classification and regression problems. These networks are known for their non-iterative and fast training algorithms. A major drawback of these networks is that they require a large number of hidden units. In this paper, we propose the use of multi-activation hidden units. Such units increase the number of tunable parameters and enable the formation of complex decision surfaces without increasing the number of hidden units. We experimentally show that multi-activation hidden units can be used either to improve the classification accuracy or to reduce computations.
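The abstract does not spell out the construction, but one plausible reading is that each hidden unit applies several activation functions to the same random projection, and the per-activation outputs are mixed by weights learned in the closed-form output layer. The sketch below illustrates that idea on a toy regression task; the function names, the choice of activations, and the least-squares output fit are all assumptions for illustration, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def multi_activation_features(X, W, b, activations):
    """Hidden features for a random-weight network with multi-activation
    units: each unit's random projection is passed through K activations,
    and the K outputs are kept as separate features so their mixing
    weights are learned in the linear output layer."""
    Z = X @ W + b  # fixed random projection, shape (n, H)
    # concatenate per-activation outputs side by side -> (n, H * K)
    return np.concatenate([g(Z) for g in activations], axis=1)

# toy regression data (hypothetical example, not from the paper)
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

H = 20                                   # hidden units
W = rng.normal(size=(5, H))              # random input weights (not trained)
b = rng.normal(size=(1, H))              # random biases (not trained)

acts = [np.tanh, lambda z: 1.0 / (1.0 + np.exp(-z)), np.sin]
Phi = multi_activation_features(X, W, b, acts)  # (200, 60)

# non-iterative training: least-squares fit of the output weights only
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = Phi @ beta
```

With three activations, the feature dimension (and hence the number of tunable output weights) triples from 20 to 60 while the number of hidden units stays at 20, matching the abstract's claim of more tunable parameters without more hidden units.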
- Publication:
- arXiv e-prints
- Pub Date:
- September 2020
- DOI:
- 10.48550/arXiv.2009.08932
- arXiv:
- arXiv:2009.08932
- Bibcode:
- 2020arXiv200908932P
- Keywords:
- Computer Science - Neural and Evolutionary Computing; Computer Science - Machine Learning
- E-Print:
- 4 pages, 4 figures. arXiv admin note: substantial text overlap with arXiv:2008.10425