Advances in Trust Research: How Context and Digital Technologies Matter

Recent Submissions

  • Item
    Trust and Vulnerability in the Cybersecurity Context
    ( 2023-01-03) Searle, Rosalind ; Renaud, Karen
    Cybersecurity attacks offer a pertinent context within which to examine the currently under-explored dynamics of trust and distrust and their consequences for organisations and their employees. Such attacks wreak havoc in organisations and are costly in both financial and reputational terms. We outline the dynamic, multi-level processes that follow an initial attack, which make salient employees’ vulnerability to an unseen exploiter and then make germane trust and trustworthiness across a number of relationships within the employing organisation. Drawing on events theory, we devise a multi-level conceptual trust model of the interrelations between the emotional, cognitive and social processes an attack can produce, distinguishing two paths with markedly different durations, magnitudes, and levels of relational and threat consequences. We explore these as dynamic experiences of trust to elucidate how vulnerability is experienced and managed by the targeted individual together with key organisational actors. This leads to the formation of an anchoring event that creates enduring changes to multiple relationships within the organisation, with consequences for individual, team and organisational resilience and risk.
  • Item
    Introduction to the Minitrack on Advances in Trust Research: How Context and Digital Technologies Matter
    ( 2023-01-03) Möhlmann, Mareike ; Jarvenpaa, Sirkka ; Alarcon, Gene ; Blomqvist, Kirsimarja
  • Item
    How Certain Robot Attributes Influence Human-to-Robot Social and Emotional Bonds
    ( 2023-01-03) Fife, Paul ; Rosengren, Warren ; Gaskin, James
    A growing population of humans is feeling lonely and isolated and may therefore benefit from social and emotional companionship. However, other humans cannot always be available to fulfill these needs, and such in-need individuals often cannot care for pets. We therefore explore how robot companions may be designed to facilitate bonds with humans. Our preliminary examination of 115 participants in a quasi-experimental study suggests that humans are more likely to develop social and emotional bonds with robots when those robots are good at communicating and conveying emotions. However, robots’ anthropomorphic attributes and responsiveness to external cues were found to have no impact on bond formation.
  • Item
    “Is This Even Relevant?” Investigating the Relevance of Antecedents to Trust in Ad Hoc Dyads
    ( 2023-01-03) Capiola, August ; Fox, Elizabeth ; Stephenson, Arielle ; Hamdan, Izz Aldin
    Trust is an important variable for effective ad hoc collaboration. As ad hoc collaborations become more prevalent, researchers and stakeholders will need to identify what features facilitate rapid and appropriate swift trust, particularly in contexts characterized by salient risk and high stakes. The present work investigated the relevance and impact of antecedents to swift trust in ad hoc dyads. In a within-subjects experiment, we leveraged a vignette and assessed which antecedents were relevant to, and affected, trust during ad hoc dyad formation. The results showed that the antecedents varied in their relevance and effect on trust. We discuss how these results align with extant research, as well as implications for future research investigating swift trust in ad hoc collaborations.
  • Item
    Artificial Intelligence-Driven Convergence and its Moderating Effect on Multi-Source Trust Transfer
    ( 2023-01-03) Renner, Maximilian ; Lins, Sebastian ; Söllner, Matthias ; Jarvenpaa, Sirkka ; Sunyaev, Ali
    AI-driven convergence describes how innovative products emerge from the embedding of artificial intelligence (AI) in existing technologies. Trust transfer theory provides an excellent opportunity to deepen prevailing discussions about trust in such converged products. However, AI-driven convergence challenges existing theoretical assumptions: the context-specific interplay of multiple trust sources may affect users’ trust transfer and the predominance of particular trust sources. We contextualized AI-driven convergence and investigated its impact on multi-source trust transfer, conducting semi-structured interviews with 25 participants in the context of autonomous vehicles. Our results indicate that users’ perceived trust source control, perceived trust source accessibility, and perceived trust source value creation share may moderate users’ trust transfer. We contribute to research by contextualizing convergence in AI, revealing the impact of AI-driven convergence on trust transfer, and highlighting the importance of trust as a dynamic construct.