Statistics and Its Interface
Volume 11 (2018)
Number 3
Discussion on “Doubly sparsity kernel learning with automatic variable selection and data extraction”
Pages: 425–428
DOI: https://dx.doi.org/10.4310/SII.2018.v11.n3.a4
Abstract
Kernel methods provide powerful and flexible tools for nonlinear learning in high dimensional data analysis, but feature selection remains a challenge in kernel learning. The proposed DOSK method provides a new unified framework that implements kernel methods while simultaneously selecting important variables and identifying a parsimonious subset of knots. A double penalty is employed to encourage sparsity in both the feature weights and the representer coefficients. The authors present the computational algorithm as well as theoretical properties of the DOSK method. In this discussion, we first highlight the DOSK method’s major contributions to the machine learning toolbox. We then discuss its connections to other nonparametric methods in the literature and point out some possible directions for future research.
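To fix ideas, a schematic version of such a doubly penalized kernel objective is sketched below. This is our own illustration, not necessarily the authors’ exact formulation: the anisotropic Gaussian kernel parametrization and the $\ell_1$ penalty forms are assumptions made for concreteness.
$$
\min_{\alpha,\; w \ge 0} \;\; \frac{1}{n}\sum_{i=1}^{n} L\!\Big(y_i,\; \sum_{j=1}^{n} \alpha_j \, K_w(x_i, x_j)\Big) \;+\; \lambda_1 \|\alpha\|_1 \;+\; \lambda_2 \|w\|_1,
\qquad
K_w(x, x') = \exp\!\Big(-\sum_{k=1}^{p} w_k \, (x_k - x'_k)^2\Big).
$$
Here the penalty on the feature weights $w$ drives variable selection (a zero $w_k$ removes variable $k$ from the kernel entirely), while the penalty on the representer coefficients $\alpha$ selects a sparse set of knots among the training points, which is the sense in which the sparsity is "double."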
Keywords
reproducing kernel Hilbert space, kernel methods, variable selection, high dimensional data analysis, penalty
2010 Mathematics Subject Classification
Primary 62F07, 62H20. Secondary 62J05.
This research is supported in part by National Science Foundation grants 1740858 and DMS-1418172, National Institutes of Health grant R01 DA044985, and Natural Science Foundation of China grant NSFC-11571009.
Received 9 June 2018
Published 17 September 2018