Combining Self-reported Confidences from Uncertain Annotators to Improve Label Quality

C. Sandrock, M. Herde, A. Calma, D. Kottke and B. Sick

2019 International Joint Conference on Neural Networks (IJCNN), 2019, pp. 1-8,
doi: 10.1109/IJCNN.2019.8852456.


The class assignment (label) is not the only information that can be queried from annotators. Additional feedback, in the form of confidences, quantifies the reliability of a provided label. Multiple annotators classifying the same sample with different levels of expertise possess different levels of confidence. In this article, we discuss different types of confidence inspired by real-world applications and how they relate to each other: the α-confidence is inspired by humans, the β-confidence works with probabilistic outputs from intelligent machines (e.g., robots), and the γ-confidence models non-normalized confidences. We consider uncertain but benevolent annotators, so the provided labels can be contradictory. To overcome this problem, we propose two fusion strategies that combine the confidences from multiple uncertain annotators into a single one. Numerical and graphical evaluations indicate superior performance of our strategies compared to related approaches, namely confidence-weighted majority vote, c-Certainty, and maximum likelihood estimation.
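To make the fusion idea concrete, below is a minimal Python sketch of the confidence-weighted majority vote baseline named in the abstract, assuming class labels encoded as integers and α-style confidences in [0, 1]. The function name and interface are illustrative only; the paper's own fusion strategies are defined in the full text.

```python
import numpy as np

def confidence_weighted_majority_vote(labels, confidences, n_classes):
    """Fuse the labels of several annotators into a single label.

    For each class, the self-reported confidences of the annotators
    voting for that class are summed; the class with the highest
    total score wins.
    """
    scores = np.bincount(labels, weights=confidences, minlength=n_classes)
    return int(np.argmax(scores))

# Three annotators label the same sample. The two low-confidence votes
# for class 1 (0.3 + 0.4 = 0.7) lose to the single high-confidence
# vote for class 0 (0.9), so the fused label is 0.
labels = np.array([0, 1, 1])
confidences = np.array([0.9, 0.3, 0.4])
print(confidence_weighted_majority_vote(labels, confidences, n_classes=2))  # 0
```

Under this scheme a single confident annotator can outweigh several uncertain ones, which is why the reliability of the self-reported confidences themselves matters.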


Supplementary Material
Paper