SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems.

Abstract

In this paper, we present a novel strategy for combining a set of compact descriptors to improve an associated recognition task. We formulate the problem from a multiple kernel learning (MKL) perspective and solve it with a stochastic variance reduced gradient (SVRG) approach to address its scalability, currently an open issue. MKL models are ideal candidates for jointly learning the optimal combination of features along with the associated predictor. However, their high computational and memory requirements prevent them from scaling beyond a dozen thousand samples, which severely limits their applicability. We propose SVRG-MKL, an MKL solution with inherent scalability properties that can optimally combine multiple descriptors over millions of samples. Our solution operates directly in the primal, avoiding the computation and storage of Gram matrices, and the optimization is performed with a proposed algorithm of linear complexity, making it computationally efficient. Our proposition builds on recent progress in SVRG, with the distinction that each kernel is treated differently during optimization, which yields faster convergence than applying off-the-shelf SVRG to MKL. Extensive experimental validation on several benchmark data sets confirms the higher accuracy and significant speedup of our solution. Our technique can be extended to other MKL problems, including visual search and transfer learning, as well as to other formulations, such as group-sensitive (GMKL) and localized MKL (LMKL) in convex settings.
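The scalability argument in the abstract rests on two ideas: working in the primal (explicit features, so no n-by-n Gram matrix is ever formed) and SVRG-style variance-reduced updates. The sketch below illustrates plain SVRG on an L2-regularized logistic loss over explicit features; it is a generic illustration of the SVRG update rule, not the SVRG-MKL algorithm itself, and in particular it does not reproduce the paper's per-kernel treatment during optimization. All names and parameter values are illustrative.

```python
import numpy as np

def svrg_logistic(X, y, lam=1e-3, eta=0.05, epochs=10, seed=0):
    """SVRG for L2-regularized logistic regression (labels in {-1, +1}).

    Illustrative sketch only: a primal solver over explicit features,
    so memory is O(n*d) rather than the O(n^2) of a Gram matrix.
    This is off-the-shelf SVRG, NOT the per-kernel scheme of SVRG-MKL.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)

    def grad_i(w, i):
        # gradient of log(1 + exp(-y_i x_i.w)) + (lam/2)||w||^2 at sample i
        m = y[i] * (X[i] @ w)
        return -y[i] * X[i] / (1.0 + np.exp(m)) + lam * w

    def full_grad(w):
        m = y * (X @ w)
        return -(X * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0) + lam * w

    for _ in range(epochs):
        w_snap = w.copy()
        mu = full_grad(w_snap)           # full gradient at the snapshot
        for _ in range(2 * n):           # inner stochastic loop
            i = rng.integers(n)
            # variance-reduced stochastic gradient:
            # grad at w, corrected by the snapshot's gradient at sample i
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w = w - eta * g
    return w
```

The correction term `grad_i(w_snap, i) - mu` is what shrinks the variance of the stochastic gradient as `w` approaches `w_snap`, allowing a constant step size; a straightforward way to see the MKL connection is to let `X` be a concatenation of per-descriptor feature blocks, with a block-wise weighting learned jointly (which is where SVRG-MKL departs from this generic sketch).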

Pub Type(s)

Journal Article

Language

eng

PubMed ID

31283489

Citation

Alioscha-Perez, Mitchel, et al. "SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems." IEEE Transactions on Neural Networks and Learning Systems, 2019.
Alioscha-Perez M, Oveneke MC, Sahli H. SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems. IEEE Trans Neural Netw Learn Syst. 2019.
Alioscha-Perez, M., Oveneke, M. C., & Sahli, H. (2019). SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems. IEEE Transactions on Neural Networks and Learning Systems. doi:10.1109/TNNLS.2019.2922123.
Alioscha-Perez M, Oveneke MC, Sahli H. SVRG-MKL: a Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems. IEEE Trans Neural Netw Learn Syst. 2019 Jul 4; PubMed PMID: 31283489.
TY - JOUR
T1 - SVRG-MKL: A Fast and Scalable Multiple Kernel Learning Solution for Features Combination in Multi-Class Classification Problems
AU - Alioscha-Perez, Mitchel
AU - Oveneke, Meshia Cedric
AU - Sahli, Hichem
Y1 - 2019/07/04
JF - IEEE Transactions on Neural Networks and Learning Systems
JO - IEEE Trans Neural Netw Learn Syst
SN - 2162-2388
UR - https://www.unboundmedicine.com/medline/citation/31283489/SVRG-MKL:_A_Fast_and_Scalable_Multiple_Kernel_Learning_Solution_for_Features_Combination_in_Multi-Class_Classification_Problems
L2 - https://dx.doi.org/10.1109/TNNLS.2019.2922123
DB - PRIME
DP - Unbound Medicine
ER -