Crowd-enhanced Technologies for Improving Reasoning and Solving Complex Problems

Recent Submissions

  • TRACE: A Stigmergic Crowdsourcing Platform for Intelligence Analysis
    (2019-01-08) Xia, Huichuan; Østerlund, Carsten; McKernan, Brian; Folkestad, James; Rossini, Patricia; Boichak, Olga; Robinson, Jerry; Kenski, Kate; Myers, Roc; Clegg, Benjamin; Stromer-Galley, Jennifer
    Crowdsourcing has become a frequently adopted approach to solving tasks ranging from conducting surveys to designing products. In the field of reasoning support, however, crowdsourcing research and applications have not been extensively explored. Reasoning support is essential in intelligence analysis: it helps analysts mitigate cognitive biases, enhance deliberation, and improve report writing. In this paper, we propose a novel approach to designing a crowdsourcing platform that facilitates stigmergic coordination, awareness, and communication for intelligence analysis. We have partly realized this proposal in TRACE (Trackable Reasoning and Analysis for Collaboration and Evaluation), a crowdsourcing system that supports intelligence analysis. We introduce several stigmergic approaches integrated into TRACE and discuss potential experimentation with these approaches. We also explain the design implications for further development of TRACE and similar crowdsourcing systems that support reasoning.
  • Comparing Pineapples with Lilikois: An Experimental Analysis of the Effects of Idea Similarity on Evaluation Performance in Innovation Contests
    (2019-01-08) Banken, Victoria; Seeber, Isabella; Maier, Ronald
    Identifying promising ideas in large innovation contests is challenging. Evaluators do not perform well when selecting the best ideas from large idea pools because their information-processing capabilities are limited. It therefore seems reasonable to let crowds evaluate subsets of ideas and thus distribute the effort among the many. One meaningful approach to subset creation is to group ideas into subsets according to their similarity. Whether evaluation based on subsets of similar ideas outperforms evaluation based on subsets of random ideas is unclear. We conduct an experiment with 66 crowd workers to explore the effects of idea similarity on evaluation performance and cognitive demand. Our study contributes to the understanding of idea selection by providing empirical evidence that crowd workers presented with subsets of similar ideas experience lower cognitive effort and achieve higher elimination accuracy than crowd workers presented with subsets of random ideas. Implications for research and practice are discussed.
  • Beyond the Medium: Rethinking Information Literacy through Crowdsourced Analysis
    (2019-01-08) Boichak, Olga; Canzonetta, Jordan; Sitaula, Niraj; McKernan, Brian; Taylor, Sarah; Rossini, Patricia; Clegg, Benjamin; Kenski, Kate; Martey, Rosa; McCracken, Nancy; Østerlund, Carsten; Myers, Roc; Folkestad, James; Stromer-Galley, Jennifer
    Information literacy encompasses a range of information evaluation skills for the purpose of making judgments. In the context of crowdsourcing, divergent evaluation criteria might introduce bias into collective judgments. Recent experiments have shown that crowd estimates can be swayed by social influence. This might be an unanticipated effect of media literacy training: encouraging readers to critically evaluate information falls short when their judgment criteria are unclear and vary among social groups. In this exploratory study, we investigate the criteria crowd workers use when reasoning through a task. We crowdsourced the evaluation of a variety of information sources, identifying multiple factors that may affect an individual's judgment as well as the accuracy of aggregated crowd estimates. Using a multi-method approach, we identify relationships between individual information assessment practices and analytical outcomes in crowds, and we propose two analytic criteria, relevance and credibility, to optimize collective judgment in complex analytical tasks.
  • The Design and Development of a Cloud-based Platform Supporting Team-oriented Evidence-based Reasoning: SWARM Systems Paper
    (2019-01-08) Sinnott, Richard
    The Smartly-assembled Wiki-style Argument Marshalling (SWARM) project commenced in 2017 as part of the US Intelligence Advanced Research Projects Activity (IARPA) funded Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) program. The SWARM project has developed an online platform that allows groups to produce evidence-based reasoning. This paper summarizes the core requirements and rationale that have driven the SWARM platform implementation. We present the technical architecture and the associated design and implementation. We also describe core capabilities intended to encourage user interaction and social acceptance of the platform by crowds.
  • Introduction to the Minitrack on Crowd-enhanced Technologies for Improving Reasoning and Solving Complex Problems
    (2019-01-08) Folkestad, James; Østerlund, Carsten; Kenski, Kate; Stromer-Galley, Jennifer