Generative and Conversational AI in Information Systems Research and Education: Opportunities and Challenges

Permanent URI for this collection: https://hdl.handle.net/10125/107576


Recent Submissions

  • Generative AI for Systems Thinking: Can a GPT Question-Answering System Turn Text into the Causal Maps Produced by Human Readers?
    (2024-01-03) Giabbanelli, Philippe; Witkowicz, Nathan
    Representing a system as a network is critical to support systems thinking, hence several tools have been developed to derive networks from text in educational technology, modeling and simulation, or forecasting. Large-scale pre-trained language models (PLMs) have recently come to the forefront to create question-answering (Q&A) systems that can extract networks from text. In this paper, we design and implement a Q&A system that uses GPT-3.5 together with 12 filters to extract causal maps from text. Our evaluation on two topics via several policy documents finds that GPT accurately extracts relevant concept nodes but occasionally reverses causal directions and struggles with the type of causality, as it lacks an understanding of event sequence. We also show that automatically extracted maps only partially resemble human-made maps collected on the same topics. By making our Q&A system open-source on a permanent repository, researchers can evaluate it with newer PLMs as technology improves.
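    A causal map of the kind this paper extracts can be modeled as a set of typed, directed edges, and an extracted map can be scored against a human-made one by comparing node and edge overlap separately, since, as the abstract notes, GPT may recover the right concepts while reversing causal directions. The sketch below is purely illustrative (the node names and helper functions are not from the paper's open-source system):

    ```python
    def jaccard(a, b):
        """Jaccard similarity of two sets (1.0 when both are empty)."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    def compare_maps(gpt_map, human_map):
        """Score node overlap and typed-edge overlap separately, so a map
        with correct concepts but reversed causal arrows still scores well
        on nodes while losing points on edges."""
        gpt_nodes = {n for src, dst, _ in gpt_map for n in (src, dst)}
        human_nodes = {n for src, dst, _ in human_map for n in (src, dst)}
        return {
            "node_overlap": jaccard(gpt_nodes, human_nodes),
            "edge_overlap": jaccard(set(gpt_map), set(human_map)),
        }

    # Each edge is (cause, effect, polarity): "+" increases, "-" decreases.
    human = {("obesity", "diabetes", "+"), ("exercise", "obesity", "-")}
    gpt = {("obesity", "diabetes", "+"), ("obesity", "exercise", "-")}  # one reversed edge
    print(compare_maps(gpt, human))  # node_overlap 1.0, edge_overlap ~0.33
    ```

    Here the reversed edge leaves node overlap perfect while edge overlap drops to one shared edge out of three distinct ones, mirroring the failure mode the evaluation reports.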
  • Narrating Causal Graphs with Large Language Models
    (2024-01-03) Giabbanelli, Philippe; Phatak, Atharva; Mago, Vijay; Agrawal, Ameeta
    The use of generative AI to create text descriptions from graphs has mostly focused on knowledge graphs, which connect concepts using facts. In this work we explore the capability of large pre-trained language models to generate text from causal graphs, where salient concepts are represented as nodes and causality is represented via directed, typed edges. The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing. Using two publicly available causal graph datasets, we empirically investigate the performance of four GPT-3 models under various settings. Our results indicate that causal text descriptions improve with training data but, compared to fact-based graphs, are harder to generate under zero-shot settings. Results further suggest that users of generative AI can deploy future applications faster, since similar performance is obtained when training a model with only a few examples as compared to fine-tuning on a large curated dataset.
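    A common baseline for this task, and a common way to feed a causal graph to a language model at all, is to linearize its typed edges into plain-English sentences. The template sketch below is an illustrative assumption, not the prompt format used in the paper:

    ```python
    def narrate_causal_graph(edges):
        """Turn typed causal edges into template sentences; usable either as
        a rule-based narration baseline or as the text serialization of a
        graph passed to a language model."""
        templates = {
            "+": "An increase in {src} leads to an increase in {dst}.",
            "-": "An increase in {src} leads to a decrease in {dst}.",
        }
        return " ".join(templates[t].format(src=s, dst=d) for s, d, t in edges)

    edges = [("advertising", "sales", "+"), ("price", "sales", "-")]
    print(narrate_causal_graph(edges))
    ```

    A learned model is then evaluated on whether it can produce more fluent, less repetitive narrations than such templates.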
  • The More Is Not the Merrier: Effects of Prompt Engineering on the Quality of Ideas Generated By GPT-3
    (2024-01-03) Memmert, Lucas; Cvetkovic, Izabel; Bittner, Eva
    Generative language models (GLMs) like GPT-3 can support humans in creative tasks. Such systems are capable of generating free-text output based on a provided input prompt. Given the outputs' sensitivity to the prompt, many techniques for prompt engineering have been proposed, both anecdotally on social media and increasingly in the literature. It is, however, unclear if and how such a system and such techniques can be employed in creative contexts such as idea generation. In our study, we investigate the effects of using six prompt engineering techniques. For each combination of techniques, we have GPT-3 generate ideas for an exemplary scenario. The ideas are rated according to novelty and value. We report on the effects of the (combinations of) prompt engineering techniques. With our study, we contribute to the emerging field of prompt engineering and shed light on supporting idea generation with GLMs, showing a pathway to embedded GLM capabilities.
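    The full-factorial design described here (one prompt per combination of techniques) can be enumerated mechanically. The sketch below uses three made-up techniques rather than the paper's six, and the technique texts are invented for illustration:

    ```python
    from itertools import combinations

    # Hypothetical technique fragments; the paper's actual six techniques differ.
    TECHNIQUES = {
        "persona": "You are an experienced innovation consultant.",
        "instruction": "Generate ten novel and valuable ideas.",
        "example": "Example idea: a subscription service for repair tools.",
    }

    def build_prompts(scenario):
        """Yield one prompt per subset of techniques, including the empty
        subset (the plain scenario), i.e. a full-factorial design."""
        names = sorted(TECHNIQUES)
        for r in range(len(names) + 1):
            for combo in combinations(names, r):
                parts = [TECHNIQUES[n] for n in combo] + [scenario]
                yield combo, "\n".join(parts)

    prompts = list(build_prompts("Scenario: reduce food waste in supermarkets."))
    print(len(prompts))  # 2^3 = 8 combinations for three techniques
    ```

    With the paper's six techniques the same enumeration yields 2^6 = 64 prompt variants, each of whose outputs can then be rated for novelty and value.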
  • Assignments in the ChatGPT-era: Case Study on Plagiarism in Digital Systems Design Courses
    (2024-01-03) Shallari, Irida; Hussain, Mazhar
    We are experiencing a prolific growth of Artificial Intelligence (AI) that is enabling its ubiquitous diffusion. As part of it, generative AI models have gained particular attention due to their promising capabilities in solving complex tasks previously associated solely with human cognitive capabilities. In this article we focus on a specific AI tool, ChatGPT, which has been developed with the vision of behaving as an educational tool tailored to everyone's learning needs. This case study analyses the capabilities of such a tool in solving a predefined set of tasks in the subject area of Digital Systems Design, with the aim of designing robust assignments for students that cannot be solved or plagiarised using this tool. The results observed across different categories of cognitive depth show that ChatGPT has extensive conceptual knowledge in the area. However, the tool has important limitations when it comes to optimisation tasks, device-specific configurations, and the overlaying of concepts, putting an emphasis on the importance of using such aspects in the design of robust tasks.
  • Student Interaction with Generative AI: An Exploration of an Emergent Information-Search Process
    (2024-01-03) Schuetzler, Ryan; Giboney, Justin; Wells, Taylor; Richardson, Benjamin; Meservy, Tom; Sutton, Cole; Posey, Clay; Steffen, Jacob; Hughes, Amanda
    ChatGPT, a generative artificial intelligence, is one of the fastest-adopted tools in history and has quickly become a valued tool in education. This study seeks to understand how generative artificial intelligence has changed the information search process. We collected prompts submitted to ChatGPT and thoughts about ChatGPT responses through a survey of 455 students at a US university. Using thematic analysis, we identified ways that ChatGPT changes the information search process of students by supporting diverse information needs, allowing cycling of prompt adjustments, and promoting easy adoption of results.
  • Investigating the Relative Impact of Generative AI vs. Humans on Voluntary Knowledge Contributions
    (2024-01-03) Shan, Guohou; Pienta, Dan; Thatcher, Jason Bennet
    Voluntary knowledge contributions on question and answer (Q&A) platforms are important for users, platforms and organizations. Generative Artificial Intelligence (GAI) techniques have made it possible to automatically generate voluntary knowledge on Q&A platforms. The relative impact of GAI vs. humans on users' voluntary contribution of knowledge to Q&A platforms has yet to be explored. On the one hand, GAI can generate highly accurate answers because it is trained on large volumes of diverse, high-quality data. On the other hand, GAI can produce incorrect answers and fabricated facts. Using data from one of the largest Q&A platforms, Stack Overflow, we apply fixed effects models to understand the relative impact of GAI vs. human contributors on answer quality. We find that, on average, GAI answers receive lower scores and are shorter, but they can also be easier to read, more positive, and more objective. Our study has both theoretical and practical implications.
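    The fixed effects approach mentioned here can be illustrated with the classic within-transformation: demeaning outcome and regressor within each question absorbs question-level effects (topic difficulty, popularity), so the remaining slope reflects the GAI-vs-human difference within the same question. The data and helpers below are toy illustrations, not the paper's Stack Overflow dataset or model:

    ```python
    from collections import defaultdict

    def within_transform(rows):
        """Demean y and x within each group (here: each question), the
        one-way fixed-effects transformation that absorbs group effects."""
        by_group = defaultdict(list)
        for g, x, y in rows:
            by_group[g].append((x, y))
        out = []
        for g, x, y in rows:
            xs, ys = zip(*by_group[g])
            out.append((x - sum(xs) / len(xs), y - sum(ys) / len(ys)))
        return out

    def ols_slope(pairs):
        """OLS slope of y on x; no intercept is needed after demeaning."""
        num = sum(x * y for x, y in pairs)
        den = sum(x * x for x, y in pairs)
        return num / den

    # (question_id, is_gai_answer, score): in this toy data the GAI answer
    # scores one point below the human answer within every question.
    rows = [
        ("q1", 0, 5), ("q1", 1, 4),
        ("q2", 0, 9), ("q2", 1, 8),
    ]
    print(ols_slope(within_transform(rows)))  # -1.0
    ```

    Note that the raw scores differ far more across questions (5 vs 9) than within them; the within-transformation strips that variation out so the estimate isolates the contributor-type effect.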