Computational nanotechnology, 2024, Volume 11, Issue 1, Pages 171–183
DOI: https://doi.org/10.33693/2313-223X-2024-11-1-171-183
(Mi cn471)
 

INFORMATICS AND INFORMATION PROCESSING

Non-iterative calculation of parameters of a linear classifier with a threshold activation function

Z. A. Ponimash, M. V. Potanin

FractalTech LLC
Abstract: The relevance of artificial intelligence (AI) systems grows every year, and AI is being introduced into many fields of activity. One of the core technologies behind AI is the artificial neural network (NN). Neural networks solve a very broad class of problems: classification, regression, autoregression, clustering, noise reduction, construction of vector representations of objects, and others. In this work we consider the simplest case, a single neuron with the Heaviside activation function, study fast ways to train it, and reduce the learning problem to finding the normal vector to the separating hyperplane and the bias (displacement) weight. Non-iterative training is a promising direction for neural networks, especially in the context of processing and analyzing high-dimensional data. The article presents a non-iterative learning method that speeds up the training of a single neuron considerably, by one to two orders of magnitude. The distinguishing feature of the approach is that the hyperplane separating two classes of objects in the feature space is determined without the repeated recalculation of weights typical of traditional iterative methods. Particular attention is paid to the case in which the principal axes of the ellipsoids describing the classes are parallel. A function pln is defined that computes the distances between objects and the centers of their classes; from these distances the unnormalized normal vector to the hyperplane and the bias weight are calculated. In addition, the method is compared with support vector machines and logistic regression. (An illustrative sketch of the non-iterative idea is given below, after the article details.)
Keywords: non-iterative learning, linear classifier with threshold activation function, static analysis, comparative analysis.
Document Type: Article
UDC: 004.8
Language: Russian
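
The abstract reduces training a single threshold neuron to a direct, one-pass computation of the normal vector to the separating hyperplane and the bias weight. As a rough illustration of that general idea only (this is not the authors' pln-based procedure; the centroid-difference normal, the midpoint bias rule, and all function names below are assumptions made for the example), a minimal Python sketch might look as follows:

```python
# Illustrative sketch only: a centroid-based linear classifier with a
# Heaviside (threshold) activation, fitted without iteration.
# This is NOT the pln-based method of the article; the midpoint bias rule
# and all names here are assumptions made for the example.
import numpy as np

def fit_centroid_classifier(X, y):
    """Compute an unnormalized normal vector and a bias from class centers.

    X : (n_samples, n_features) array of feature vectors
    y : (n_samples,) array of 0/1 class labels
    """
    mu0 = X[y == 0].mean(axis=0)       # center of class 0
    mu1 = X[y == 1].mean(axis=0)       # center of class 1
    w = mu1 - mu0                      # normal vector to the separating hyperplane
    b = -w @ (mu0 + mu1) / 2.0         # bias: hyperplane through the midpoint of the centers
    return w, b

def predict(X, w, b):
    """Heaviside activation: 1 if the point lies on the class-1 side, else 0."""
    return (X @ w + b >= 0).astype(int)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic Gaussian blobs in 2-D
    X0 = rng.normal(loc=[-2.0, 0.0], scale=1.0, size=(200, 2))
    X1 = rng.normal(loc=[+2.0, 0.0], scale=1.0, size=(200, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 200 + [1] * 200)

    w, b = fit_centroid_classifier(X, y)   # single pass over the data, no iterations
    acc = (predict(X, w, b) == y).mean()
    print("weights:", w, "bias:", b, "train accuracy:", acc)
```

For roughly isotropic, well-separated classes this midpoint rule already gives a usable separating hyperplane in one pass; the article instead derives the unnormalized normal vector and the bias from the pln distances between objects and the class centers, with special attention to the case of parallel principal axes of the class ellipsoids, and compares the result with support vector machines and logistic regression.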
Citation: Z. A. Ponimash, M. V. Potanin, “Non-iterative calculation of parameters of a linear classifier with a threshold activation function”, Comp. nanotechnol., 11:1 (2024), 171–183
Citation in format AMSBIB
\Bibitem{PonPot24}
\by Z.~A.~Ponimash, M.~V.~Potanin
\paper Non-iterative calculation of parameters of a linear classifier with a threshold activation function
\jour Comp. nanotechnol.
\yr 2024
\vol 11
\issue 1
\pages 171--183
\mathnet{http://mi.mathnet.ru/cn471}
\crossref{https://doi.org/10.33693/2313-223X-2024-11-1-171-183}
Linking options:
  • https://www.mathnet.ru/eng/cn471
  • https://www.mathnet.ru/eng/cn/v11/i1/p171