Online learning of positive and negative prototypes with explanations based on kernel expansion
Classification remains an active research topic, yet most models presented in the literature lack explanations that are comprehensible to humans. One way to obtain explainability is to separate the network weights into positive and negative parts relative to a prototype: the positive part represents the weights of the correct class, and the negative part represents the weights incorrectly assigned to that class. This network is the winner-takes-all network based on positive and negative Euclidean distances (±ED-WTA). In this paper, kernel expansions are used to achieve nonlinear modeling with local explainability, yielding higher accuracy in this setting. Methods are also presented to improve the time and memory complexity of the algorithm, and the Nyström method is used to approximate the kernel so that the algorithm scales to large datasets. Using this single-layer network with a Gaussian kernel function, 98.01% accuracy is obtained on the MNIST test set, and the kernel expansions explain the decisions well in terms of the input data. Explainability is also investigated on a two-class subset of the FERET dataset.
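The two ingredients named in the abstract, a distance-based winner-takes-all rule over positive/negative prototypes and Nyström kernel approximation, can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the scoring form (squared distance to the negative prototype minus squared distance to the positive one), the use of class means as prototypes, and all parameter values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def nystrom_features(X, landmarks, gamma=0.1):
    """Nystrom map: finite features whose dot products approximate the
    full kernel, K ~ C W^{-1} C^T, via Phi = C W^{-1/2}."""
    C = rbf_kernel(X, landmarks, gamma)           # (n, m)
    W = rbf_kernel(landmarks, landmarks, gamma)   # (m, m)
    vals, vecs = np.linalg.eigh(W)
    vals = np.maximum(vals, 1e-12)                # guard tiny eigenvalues
    return C @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

def ed_wta_scores(phi, pos_protos, neg_protos):
    """Per-class score: squared distance to the negative prototype minus
    squared distance to the positive one; argmax is the predicted class."""
    d_pos = np.sum((phi[:, None, :] - pos_protos[None]) ** 2, axis=2)
    d_neg = np.sum((phi[:, None, :] - neg_protos[None]) ** 2, axis=2)
    return d_neg - d_pos

# Toy usage: two Gaussian blobs; prototypes are class means in feature space.
rng = np.random.default_rng(0)
X = np.vstack([rng.standard_normal((50, 5)) - 2.0,
               rng.standard_normal((50, 5)) + 2.0])
y = np.array([0] * 50 + [1] * 50)

landmarks = X[rng.choice(len(X), 20, replace=False)]
Phi = nystrom_features(X, landmarks)
pos = np.stack([Phi[y == c].mean(0) for c in (0, 1)])   # positive prototypes
neg = np.stack([Phi[y != c].mean(0) for c in (0, 1)])   # negative prototypes
pred = np.argmax(ed_wta_scores(Phi, pos, neg), axis=1)
accuracy = (pred == y).mean()
```

Because the Nyström map replaces the full n-by-n kernel matrix with an n-by-m feature matrix (m landmarks, m much smaller than n), prototype updates and scoring cost O(nm) instead of O(n^2), which is the usual motivation for using it on large datasets.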