Abstract
Crowdsourcing is an economical and efficient strategy for collecting data annotations through an online platform. Crowd workers with different expertise are paid for their service, and the task requester usually has a limited budget. How to collect reliable annotations for multilabel data, and how to compute the consensus within budget, is an interesting and challenging, but rarely studied, problem. In this article, we propose a novel approach to accomplish active multilabel crowd consensus (AMCC). AMCC accounts for the commonality and individuality of workers and assumes that workers can be organized into different groups, where each group includes a set of workers who share similar annotation behavior and label correlations. To achieve an effective multilabel consensus, AMCC models workers' annotations via a linear combination of commonality and individuality, and reduces the impact of unreliable workers by assigning smaller weights to their groups. To collect reliable annotations at reduced cost, AMCC introduces an active crowdsourcing learning strategy that selects sample-label-worker triplets: in a triplet, the selected sample and label are the most informative for the consensus model, and the selected worker can reliably annotate the sample at a low cost. Our experimental results on multilabel data sets demonstrate the advantages of AMCC over state-of-the-art solutions in computing crowd consensus and in reducing the budget by choosing cost-effective triplets.
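Since the paper's implementation is not reproduced here, the following is a minimal, hypothetical Python/NumPy sketch of the two ideas the abstract describes: modeling a worker's annotation as a linear combination of a group-level commonality term and a worker-level individuality term, and greedily picking a sample-label-worker triplet that trades off informativeness against annotation cost. All names (`commonality` matrices `C`, `individuality` matrices `I`, `group_weight`, `alpha`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_labels, n_workers, n_groups = 50, 5, 8, 3

# Assumed model state: each group g has a commonality matrix C[g] (its shared
# view of sample-label scores), each worker w has a small individuality
# matrix I[w], and each worker belongs to exactly one group.
C = rng.normal(size=(n_groups, n_samples, n_labels))
I = rng.normal(scale=0.1, size=(n_workers, n_samples, n_labels))
group_of = rng.integers(n_groups, size=n_workers)
group_weight = np.ones(n_groups) / n_groups  # shrunk for unreliable groups
alpha = 0.8                                  # mixing coefficient (assumed)

def predicted_annotation(w, i, l):
    """Worker w's score for label l of sample i: commonality + individuality."""
    g = group_of[w]
    return alpha * C[g, i, l] + (1.0 - alpha) * I[w, i, l]

# Weighted consensus over groups for every (sample, label) pair.
scores = np.einsum('g,gil->il', group_weight, C)

# Active triplet selection (greedy sketch): pick the (sample, label) whose
# consensus score is least certain (closest to the decision boundary 0),
# then the cheapest worker whose group weight is high enough to be reliable.
cost = rng.uniform(1.0, 3.0, size=n_workers)  # per-annotation cost (assumed)
i_star, l_star = np.unravel_index(np.argmin(np.abs(scores)), scores.shape)
reliable = group_weight[group_of] >= group_weight.mean()
candidates = np.where(reliable)[0]
w_star = candidates[np.argmin(cost[candidates])]

print(f"query triplet: sample={i_star}, label={l_star}, worker={w_star}")
```

In the paper, the informativeness criterion and the group weights are learned jointly with the consensus model; the uncertainty-plus-cost heuristic above is only a stand-in to make the triplet-selection loop concrete.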
Original language | English (US) |
---|---|
Pages (from-to) | 1-12 |
Number of pages | 12 |
Journal | IEEE Transactions on Neural Networks and Learning Systems |
DOIs | |
State | Published - Apr 16 2020 |
Bibliographical note
KAUST Repository Item: Exported on 2020-10-01. Acknowledgements: We thank the authors for generously sharing their code and datasets with us for the experiments. This work is supported by the Natural Science Foundation of China (61872300 and 61873214), the Fundamental Research Funds for the Central Universities (XDJK2019B024), and the Natural Science Foundation of CQ CSTC (cstc2018jcyjAX0228).