TY - JOUR
T1 - Multi-task learning for analyzing and sorting large databases of sequential data
AU - Ni, Kai
AU - Paisley, John
AU - Carin, Lawrence
AU - Dunson, David
N1 - Generated from Scopus record by KAUST IRTS on 2021-02-09
PY - 2008/8/1
Y1 - 2008/8/1
N2 - A new hierarchical nonparametric Bayesian framework is proposed for the problem of multi-task learning (MTL) with sequential data. The models for multiple tasks, each characterized by sequential data, are learned jointly, and the intertask relationships are obtained simultaneously. This MTL setting is used to analyze and sort large databases composed of sequential data, such as music clips. Within each data set, we represent the sequential data with an infinite hidden Markov model (iHMM), avoiding the problem of model selection (selecting the number of states). Across the data sets, the multiple iHMMs are learned jointly in an MTL setting, employing a nested Dirichlet process (nDP). The nDP-iHMM MTL method allows simultaneous task-level and data-level clustering, with which the individual iHMMs are enhanced and the between-task similarities are learned. Therefore, in addition to improved learning of each of the models via appropriate data sharing, the learned sharing mechanisms are used to infer interdata relationships of interest for data search. Specifically, the MTL-learned task-level sharing mechanisms are used to define the affinity matrix in a graph-diffusion sorting framework. To speed up MCMC inference for large databases, the nDP-iHMM is truncated to yield a nested Dirichlet-distribution-based HMM representation, which accommodates fast variational Bayesian (VB) analysis for large-scale inference. The effectiveness of the framework is demonstrated using a database composed of 2500 digital music pieces. © 2008 IEEE.
AB - A new hierarchical nonparametric Bayesian framework is proposed for the problem of multi-task learning (MTL) with sequential data. The models for multiple tasks, each characterized by sequential data, are learned jointly, and the intertask relationships are obtained simultaneously. This MTL setting is used to analyze and sort large databases composed of sequential data, such as music clips. Within each data set, we represent the sequential data with an infinite hidden Markov model (iHMM), avoiding the problem of model selection (selecting the number of states). Across the data sets, the multiple iHMMs are learned jointly in an MTL setting, employing a nested Dirichlet process (nDP). The nDP-iHMM MTL method allows simultaneous task-level and data-level clustering, with which the individual iHMMs are enhanced and the between-task similarities are learned. Therefore, in addition to improved learning of each of the models via appropriate data sharing, the learned sharing mechanisms are used to infer interdata relationships of interest for data search. Specifically, the MTL-learned task-level sharing mechanisms are used to define the affinity matrix in a graph-diffusion sorting framework. To speed up MCMC inference for large databases, the nDP-iHMM is truncated to yield a nested Dirichlet-distribution-based HMM representation, which accommodates fast variational Bayesian (VB) analysis for large-scale inference. The effectiveness of the framework is demonstrated using a database composed of 2500 digital music pieces. © 2008 IEEE.
UR - http://ieeexplore.ieee.org/document/4523930/
UR - http://www.scopus.com/inward/record.url?scp=48849102964&partnerID=8YFLogxK
U2 - 10.1109/TSP.2008.924798
DO - 10.1109/TSP.2008.924798
M3 - Article
SN - 1053-587X
VL - 56
SP - 3918
EP - 3931
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
IS - 8 II
ER -