Published November 2, 2021 | Version v1
Conference paper (Open Access)

Layer-wise relevance propagation based sample condensation for kernel machines

  • 1. University of Münster, Germany
  • 2. Sichuan University, China

Description

Kernel machines are a powerful class of methods for classification and regression. Making kernel machines fast and scalable to large data, however, remains a challenging problem because of the need to store and operate on the Gram matrix. In this paper we propose a novel approach to sample condensation for kernel machines that reduces the training set, preferably without impairing classification performance. To the best of our knowledge, no previous work with this goal has been reported in the literature. Our approach builds on the neural network interpretation of kernel machines: explainable AI techniques, in particular the Layer-wise Relevance Propagation (LRP) method, are used to measure the relevance (importance) of individual training samples. Given this relevance measure, a decremental strategy is proposed for sample condensation. Experimental results on three data sets show that our approach achieves a substantial reduction in the number of training samples.
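
The core idea lends itself to a short sketch. Below is a minimal, hypothetical Python illustration (not the authors' code): the kernel machine is viewed as a one-hidden-layer network whose hidden units correspond to the training samples, an LRP-style decomposition assigns each sample a relevance score, and the least relevant samples are removed decrementally. Kernel ridge regression stands in for the kernel machine, the one-layer LRP rule is only one plausible instantiation, and all names and parameters (rbf_kernel, lrp_relevance, condense, keep, steps) are our assumptions.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_machine(X, y, gamma=1.0, lam=1e-3):
    # Kernel ridge regression as a stand-in kernel machine:
    # dual coefficients alpha solve (K + lam * I) alpha = y,
    # giving the decision function f(x) = sum_i alpha_i * k(x, x_i).
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def lrp_relevance(X_train, alpha, X_val, gamma=1.0):
    # One-layer LRP decomposition: since f(x) = sum_i alpha_i * k(x, x_i),
    # training sample i (a "hidden unit") receives relevance
    # alpha_i * k(x, x_i); absolute relevances are summed over X_val.
    K = rbf_kernel(X_val, X_train, gamma)   # shape (n_val, n_train)
    return np.abs(K * alpha[None, :]).sum(axis=0)

def condense(X, y, X_val, keep=0.5, steps=5, gamma=1.0):
    # Decremental condensation: repeatedly retrain, score the remaining
    # samples by relevance, and drop the least relevant ones until
    # roughly the fraction `keep` of the training set remains.
    idx = np.arange(len(X))
    drop_per_step = int(len(X) * (1.0 - keep) / steps)
    for _ in range(steps):
        alpha = fit_kernel_machine(X[idx], y[idx], gamma)
        rel = lrp_relevance(X[idx], alpha, X_val, gamma)
        idx = idx[np.argsort(rel)[drop_per_step:]]  # keep high relevance
    return idx  # indices of the condensed training set

For a binary problem one would encode labels as y in {-1, +1}, call idx = condense(X_train, y_train, X_val), and retrain the final kernel machine on X_train[idx]; the paper's exact propagation rule, condensation schedule, and stopping criterion may differ from this sketch.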

Files

CAIP2021.pdf (471.7 kB)
md5:51d340ee0119890a0366115b74bcde0c