Please use this identifier to cite or link to this item: http://hdl.handle.net/10125/60059

DeepCause: Hypothesis Extraction from Information Systems Papers with Deep Learning for Theory Ontology Learning

File: 0619.pdf (1.11 MB, Adobe PDF)

Item Summary

Title: DeepCause: Hypothesis Extraction from Information Systems Papers with Deep Learning for Theory Ontology Learning
Authors: Mueller, Roland; Abdullaev, Sardor
Keywords: Knowing What We Know: Theory, Meta-analysis, and Review; Organizational Systems and Technology; Causal Relation Extraction; Deep Learning; Natural Language Processing; Sequence Labelling; Theory Ontology Learning
Date Issued: 08 Jan 2019
Abstract: This paper applies different deep learning architectures for sequence labelling to extract causes, effects, moderators, and mediators from hypotheses of information systems papers for theory ontology learning. We compared a variety of recurrent neural network (RNN) architectures, such as long short-term memory (LSTM), bidirectional LSTM (BiLSTM), simple RNNs, and gated recurrent units (GRU). We analyzed GloVe word embeddings, character-level vector representations of words, and part-of-speech (POS) tags. Furthermore, we evaluated various hyperparameters and architectures to achieve the highest performance scores. The prototype was evaluated on hypotheses from the AIS basket of eight. The F1 score for chunk-level sequence labelling of causal variables was 80%, with a precision of 80% and a recall of 80%.
Pages/Duration: 10 pages
URI: http://hdl.handle.net/10125/60059
ISBN: 978-0-9981331-2-6
DOI: 10.24251/HICSS.2019.752
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
https://creativecommons.org/licenses/by-nc-nd/4.0/
Appears in Collections: Knowing What We Know: Theory, Meta-analysis, and Review
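
The abstract above describes comparing RNN architectures (simple RNN, LSTM, BiLSTM, GRU) for labelling causes, effects, moderators, and mediators in hypothesis sentences, evaluated with chunk-level precision, recall, and F1. The paper's own code is not part of this record; the following is a minimal illustrative sketch in PyTorch of that kind of tagger, under assumed choices: a BIO-style tag set for the four variable roles, a toy vocabulary, and a plain trainable embedding standing in for the GloVe, character-level, and POS features mentioned in the abstract.

# Minimal illustrative sketch only, not the authors' implementation: a BiLSTM
# sequence labeller in PyTorch with an assumed BIO-style tag set for causes,
# effects, moderators, and mediators. A plain trainable embedding stands in
# for the GloVe, character-level, and POS features described in the abstract.
import torch
import torch.nn as nn

TAGS = ["O",
        "B-CAUSE", "I-CAUSE",
        "B-EFFECT", "I-EFFECT",
        "B-MODERATOR", "I-MODERATOR",
        "B-MEDIATOR", "I-MEDIATOR"]

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_tags=len(TAGS)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # word id -> vector
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)     # per-token tag scores

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)              # (batch, seq_len, 2 * hidden_dim)
        return self.out(h)               # (batch, seq_len, num_tags)

# Toy, untrained usage on a hypothesis-like sentence (hypothetical vocabulary):
# "perceived usefulness increases intention to use"
vocab = {"<pad>": 0, "perceived": 1, "usefulness": 2, "increases": 3,
         "intention": 4, "to": 5, "use": 6}
model = BiLSTMTagger(vocab_size=len(vocab))
tokens = torch.tensor([[1, 2, 3, 4, 5, 6]])
predicted = model(tokens).argmax(dim=-1)         # most likely tag per token
print([TAGS[i] for i in predicted[0].tolist()])  # random until trained

Chunk-level scores of the kind reported in the abstract count a variable as correct only when its whole span and label match the gold annotation. A library such as seqeval computes them directly from BIO tags; the choice of seqeval here is an assumption for illustration, not a statement about the authors' tooling.

# Illustrative chunk-level evaluation with seqeval (assumed tooling).
from seqeval.metrics import precision_score, recall_score, f1_score

# Hypothetical gold and predicted tags for one hypothesis sentence.
y_true = [["B-CAUSE", "I-CAUSE", "O", "B-EFFECT", "I-EFFECT", "I-EFFECT"]]
y_pred = [["B-CAUSE", "I-CAUSE", "O", "B-EFFECT", "I-EFFECT", "O"]]

# The EFFECT chunk is truncated in the prediction, so it does not count.
print("precision:", precision_score(y_true, y_pred))  # 0.5
print("recall:   ", recall_score(y_true, y_pred))     # 0.5
print("f1:       ", f1_score(y_true, y_pred))         # 0.5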


Please email libraryada-l@lists.hawaii.edu if you need this content in ADA-compliant format.

This item is licensed under a Creative Commons License.