AI and Future of Work

Recent Submissions

  • Item
    To Use or Not to Use Artificial Intelligence? A Framework for the Ideation and Evaluation of Problems to Be Solved with Artificial Intelligence
    (2021-01-05) Sturm, Timo; Fecho, Mariska; Buxmann, Peter
    The recent advent of artificial intelligence (AI) solutions that surpass humans’ problem-solving capabilities has revealed AI’s great potential to act as a new type of problem solver. Despite decades of analysis, research on organizational problem solving has commonly assumed that the problem solver is essentially human. Yet, it remains unclear how existing knowledge on human problem solving translates to a context with problem-solving machines. As a first step toward understanding this novel context, we conducted a qualitative study with 24 experts to explore the process of problem finding, which forms the essential first step in problem-solving activities and aims at uncovering reasonable problems to be solved. With our study, we synthesize the procedural artifacts and key factors that emerged to propose a framework for problem finding in AI solver contexts. Our findings enable future research on human-machine problem solving and offer practitioners helpful guidance on identifying and managing reasonable AI initiatives.
  • Item
    Recorded Work Meetings and Algorithmic Tools: Anticipated Boundary Turbulence
    (2021-01-05) Cardon, Peter; Ma, Haibing; Fleischmann, A. Carolin; Aritz, Jolanta
    Meeting recordings and algorithmic tools that process and evaluate recorded meeting data may provide many new opportunities for employees, teams, and organizations. Yet, the use of this data raises important consent, data use, and privacy issues. The purpose of this research is to identify key tensions that should be addressed in organizational policymaking about data use from recorded work meetings. Based on interviews with 50 professionals in the United States, China, and Germany, we identify the following five key tensions (anticipated boundary turbulence) that should be addressed in a social contract approach to organizational policymaking for data use of recorded work meetings: disruption versus help in relationships, privacy versus transparency, employee control versus management control, learning versus evaluation, and trust in AI versus trust in people.
  • Item
    Integration of Artificial Intelligence into Recruiting Young Undergraduates: the Perceptions of 20–23-Year-Old Students
    (2021-01-05) Hekkala, Sara; Hekkala, Riitta
    As applicants who might be subject to artificial intelligence (AI) in recruitment, students aged 20–23 were consulted using a qualitative approach employing focus groups. This study found that young undergraduates see AI as the future face of recruitment regardless of its challenges. Our findings are very similar to those of previous studies; however, differences arose regarding how beneficial young undergraduates perceived AI to be and how AI should be used in recruitment. In addition, this study presents a preliminary framework for the integration of AI into recruiting young undergraduates. The framework states that AI is useful in all stages of recruiting, yet to different extents in different phases. AI is most useful in phases that involve grunt work, and despite the integration of AI, the human touch should still be present in recruiting activities.
  • Item
    Human Decision Making in AI Augmented Systems: Evidence from the Initial Coin Offering Market
    (2021-01-05) Basu, Saunak; Garimella, Aravinda; Han, Wencui; Dennis, Alan
    The growing consensus that human intelligence and artificial intelligence are complementary has led to human-AI hybrid systems. As digital platforms incorporate human-AI hybrids, platform designers need to evaluate how AI influences human judgment and how such hybrid systems perform. In this paper, we investigate: Are human decisions influenced by AI agents in high-uncertainty environments, such as evaluating initial coin offering (ICO) projects? Under what circumstances are humans able to mitigate errors induced by AI agents? Our results suggest that, in general, humans are influenced by AI agents. Humans tend to use AI as a filter to rule out low-quality projects, while a high AI rating triggers human experts to apply their own judgment.
  • Item
    Design Foundations for AI Assisted Decision Making: A Self Determination Theory Approach
    (2021-01-05) De Vreede, Triparna; Raghavan, Mukhunth; De Vreede, Gert-Jan
    Progress in technology and processing power has enabled the advent of sophisticated technologies, including artificial intelligence (AI) agents. AI agents have penetrated society in many forms, including conversational agents, or chatbots. Because these chatbots have a social component, it is critical to evaluate the social aspects of their design and the impact of those aspects on user outcomes. This study employs Self-Determination Theory to examine the effect of three motivational needs on the user interaction outcome variables of a decision-making chatbot. Specifically, this study looks at the influence of relatedness, competency, and autonomy on user satisfaction, engagement, decision efficiency, and decision accuracy. A carefully designed experiment revealed that all three needs are important for user satisfaction and engagement, while competency and autonomy are associated with decision accuracy. These findings highlight the importance of considering psychological constructs during AI design. Our findings also offer useful implications for AI designers and organizations that plan to use AI-assisted chatbots to improve decision-making efforts.
  • Item
    Automation of Routine Work: A Case Study of Employees' Experiences of Work Meaningfulness
    (2021-01-05) Staaby, Anne; Hansen, Kjeld; Grønli, Tor-Morten
    The idea of automation replacing humans in the workplace has received considerable attention, while less attention has been paid to how people develop a meaningful work life when routine work is automated. In this study, we investigate how employees who have had their routine work automated with robotic process automation (RPA) have experienced its influence on their work and its meaningfulness. Concretely, we conduct a case study of how employees experience this process in three case organizations in Oslo, Norway. We make theoretical contributions by combining the literatures on automation of work, RPA, and work meaningfulness to understand what opportunities and pitfalls organizations encounter when they seek to harness value from their human resources in the process. We also contribute implications for practice by suggesting that organizations focus on creating autonomy and job-crafting opportunities for employees when they automate routine work with RPA.
  • Item
    Automation and Artificial Intelligence in Software Engineering: Experiences, Challenges, and Opportunities
    (2021-01-05) Latinovic, Milan; Pammer-Schindler, Viktoria
    Automation and artificial intelligence have a transformative influence on many sectors, and software engineers are the actors who engineer this transformation. At the same time, there is little knowledge of how automation and artificial intelligence impact software engineering practice. To address this question, we conducted semi-structured interviews with experienced software practitioners across frontend and backend development, DevOps, R&D, integration, and leadership positions. Our findings reveal that 1) automation appears as micro-automation, that is, the automation of tiny, specific tasks; 2) automation emerges as a side product of work and is driven bottom-up in software engineering; and 3) automation can cause cognitive overhead due to automatically generated notifications. Furthermore, our interview participants do not expect automation and artificial intelligence tools to substantially change the essence of software engineering in the foreseeable future.
  • Item
    An Empirical Study Exploring Difference in Trust of Perceived Human and Intelligent System Partners
    (2021-01-05) Elson, Joel; Derrick, Douglas; Merino, Luis
    Intelligent systems are increasingly relied on as partners in making decisions in business contexts. With advances in artificial intelligence technology and system interfaces, it is increasingly difficult to distinguish these system partners from their human counterparts. Understanding the role of perceived humanness and its impact on trust in these situations is important, as trust is widely recognized as critical to system adoption and effective collaboration. We conducted an exploratory study in which individuals collaborated with an intelligent system partner to make several critical decisions. Measured trust levels and survey responses were analyzed. Results suggest that greater trust is experienced when the partner is perceived to be human. Additionally, the attribution of expert knowledge to partners drove perceptions of humanness. Partners that adhered to strict syntactical requirements, displayed quick response times, had an unnatural conversational tone, or were unrealistically available were perceived as machine-like.
  • Item
    Introduction to the Minitrack on AI and Future of Work
    (2021-01-05) Waizenegger, Lena; De Vreede, Triparna; Seeber, Isabella