Linear Neurons and Their Learning Algorithms
Ying Liu

Abstract
In this paper, we introduce the concept of Linear neurons and new learning algorithms based on them, and explain the reasoning behind these algorithms. First, we briefly review the Boltzmann Machine and the fact that its stochastic dynamics form a Markov chain whose invariant distribution is the Boltzmann distribution. We then review the θ-transformation and its completeness, i.e., any function can be expanded via the θ-transformation. We further review the ABM (Attrasoft Boltzmann Machine). The invariant distribution of an ABM is a θ-transformation; therefore, an ABM can simulate any distribution. We then argue that the ABM algorithm is only the first in a family of new algorithms based on the θ-transformation, and we introduce the simplest algorithm in this family, which is based on Linear neurons. We also discuss the advantages of this algorithm: accuracy, stability, and low time complexity.

Full Text: PDF     DOI: 10.15640/jcsit.v6n2a1