Abstract
The purpose of this research is to jointly learn multiple classification tasks by appropriately sharing information between similar tasks. In this setting, examples of different tasks include the discrimination of targets from non-targets by different sonars, or by the same sonar operating in sufficiently different environments. This is known as multi-task learning (MTL) and is accomplished via a Bayesian approach whereby the learned parameters for classifiers of similar tasks are drawn from a common prior. To learn which tasks are similar and the appropriate priors, a Dirichlet process prior is employed, and inference is performed using mean-field variational Bayes. The result is that, for many real-world instances where training data is limited, MTL exhibits a significant improvement over both learning individual classifiers for each task and pooling all data to train one overall classifier. The performance of this method is demonstrated on simulated data and on experimental data from multiple imaging sonars operating over multiple environments.
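As a rough illustration of the kind of hierarchical model the abstract describes, the sketch below treats each task's classifier parameters as draws from a Dirichlet process, so that similar tasks share a common prior atom. The concentration parameter $\alpha$, base measure $G_0$, and logistic likelihood are illustrative assumptions, not details taken from the paper.

```latex
% Illustrative generative model (forms assumed, not taken from the paper).
% A Dirichlet process prior lets tasks that share an atom draw their
% classifier parameters from a common prior, which is how "similar tasks"
% end up sharing information.
\begin{align*}
G &\sim \mathrm{DP}(\alpha, G_0)
    && \text{shared random prior over classifier parameters} \\
\boldsymbol{w}_t &\sim G, \quad t = 1, \dots, T
    && \text{parameters for task } t \text{ (tasks sharing an atom are grouped)} \\
y_{t,i} \mid \boldsymbol{x}_{t,i}, \boldsymbol{w}_t
    &\sim \mathrm{Bernoulli}\!\left(\sigma\!\left(\boldsymbol{w}_t^{\top}\boldsymbol{x}_{t,i}\right)\right)
    && \text{target / non-target label for example } i \text{ of task } t
\end{align*}
% Mean-field variational Bayes then approximates the joint posterior over
% G (via its stick-breaking representation) and the task parameters w_t.
```

Because a draw from a Dirichlet process is discrete with probability one, tasks assigned to the same atom effectively pool their training data, which is consistent with the abstract's claim of improvement over both per-task classifiers and a single fully pooled classifier.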
| Original language | English (US) |
| --- | --- |
| Title of host publication | Proceedings of SPIE - The International Society for Optical Engineering |
| State | Published - Nov 15 2007 |
| Externally published | Yes |