Review of Optimization in Improving Extreme Learning Machine
DOI: https://doi.org/10.4108/eai.17-9-2021.170960

Keywords: Extreme learning machine (ELM), Single hidden-layer feedforward neural networks, Kernel functions, Sensitivity, Input weights and Activation bias

Abstract
Nowadays, the Extreme Learning Machine (ELM) has gained considerable interest because of its noteworthy qualities compared with conventional single hidden-layer feedforward neural networks and kernel functions. Although ELM has many advantages, it also has some potential shortcomings, such as performance sensitivity to the underlying state of the hidden neurons, the input weights, and the choice of activation functions. To overcome the limitations of traditional ELM, researchers have devised numerical methods that optimize specific parts of ELM in order to enhance its performance on a variety of complex problems and applications. Through this study, we therefore examine the different algorithms developed for optimizing ELM, evaluated against survey criteria such as datasets, algorithm, objectives, training time, accuracy, error rate, and the number of hidden neurons. This study will help other researchers identify the research issues that lower the performance of ELM.
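To make the sensitivity discussed above concrete, the following is a minimal sketch of a standard ELM (not code from any surveyed paper): input weights and biases are drawn at random, and only the output weights are solved analytically via the Moore-Penrose pseudoinverse. The toy regression task, network size, and seeds are illustrative assumptions; rerunning with different seeds shows how the random hidden-layer state affects the error.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    # Random input weights and biases -- the source of the performance
    # sensitivity that optimization-based ELM variants try to address.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activation matrix
    beta = np.linalg.pinv(H) @ y    # analytic least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem (illustrative only): different random seeds for the
# hidden layer give different training errors on the same data.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 1))
y = np.sin(3.0 * X[:, 0])
for seed in (1, 2, 3):
    W, b, beta = elm_train(X, y, n_hidden=20, rng=np.random.default_rng(seed))
    mse = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
    print(f"seed {seed}: train MSE = {mse:.6f}")
```

Because training reduces to a single linear solve, ELM trains far faster than backpropagation-based networks; the optimization methods surveyed here typically tune the random weights, biases, or hidden-layer size rather than the output solve.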
License
This is an open-access article distributed under the terms of the Creative Commons Attribution CC BY 3.0 license, which permits unlimited use, distribution, and reproduction in any medium so long as the original work is properly cited.