Expert-quality Dataset Labeling via Gamified Crowdsourcing on Point-of-Care Lung Ultrasound Data

Date
2024-01-03
Authors
Duggan, Nicole M.
Jin, Mike
Duhaime, Erik
Kapur, Tina
Duran Mendicuti, Maria Alejandra
Hallisey, Stephen
Bernier, Denie
Selame, Lauren
Asgari-Targhi, Ameneh
Fischetti, Chanel
Starting Page
3891
Abstract
Machine learning tools hold promise for automating lung ultrasound data interpretation. Building such tools requires labeled training datasets. We tested whether a gamified crowdsourcing approach can produce lung ultrasound clip labels of clinical-expert quality. 2,384 lung ultrasound clips were retrospectively collected. Six lung ultrasound experts classified 393 of these clips as having no B-lines, one or more discrete B-lines, or confluent B-lines, creating two sets of reference standard labels: a training set and a test set. The training set was used to train users on a gamified crowdsourcing platform; the test set was used to compare the concordance of the resulting crowd labels, and of individual experts, to the reference standard. 99,238 crowdsourced opinions were collected from 426 unique users over 8 days. Mean labeling concordance of individual experts relative to the reference standard was 85.0% ± 2.0% (SEM), compared to 87.9% concordance for crowdsourced labels (p = 0.15). Scalable, high-quality labeling approaches such as crowdsourcing may streamline training dataset creation for machine learning model development.
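
For illustration, a minimal sketch in Python of the concordance comparison described above, assuming simple majority-vote aggregation of crowd opinions per clip (the record does not specify the paper's actual aggregation rule); all names and data below are hypothetical:

from collections import Counter

def majority_vote(opinions):
    # Most common label among the crowd opinions for one clip
    # (an assumed aggregation rule, not confirmed by the record).
    return Counter(opinions).most_common(1)[0][0]

def concordance(predicted, reference):
    # Fraction of clips whose label matches the reference standard.
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

# Hypothetical crowd opinions for three clips, using the paper's
# three classes: no B-lines, discrete B-lines, confluent B-lines.
crowd_opinions = {
    "clip_1": ["no_b_lines", "no_b_lines", "discrete_b_lines"],
    "clip_2": ["confluent_b_lines"] * 5,
    "clip_3": ["discrete_b_lines", "confluent_b_lines", "discrete_b_lines"],
}
reference = {
    "clip_1": "no_b_lines",
    "clip_2": "confluent_b_lines",
    "clip_3": "discrete_b_lines",
}

crowd_labels = [majority_vote(crowd_opinions[k]) for k in sorted(reference)]
ref_labels = [reference[k] for k in sorted(reference)]
print(f"crowd concordance: {concordance(crowd_labels, ref_labels):.1%}")

The same concordance function applied to each individual expert's labels on the test set would yield the per-expert figures that are averaged into the 85.0% ± 2.0% (SEM) statistic reported above.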
Keywords
Technology, Machine Learning, and Bias in Emergency Care, artificial intelligence, crowdsourcing, machine learning, pocus, ultrasound
Extent
7 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International