Interpretable Machine Learning

Recent Submissions

  • Item
    Business Inferences and Risk Modeling with Machine Learning: The Case of Aviation Incidents
    (2023-01-03) Cankaya, Burak; Topuz, Kazim; Glassman, Aaron
    Machine learning becomes truly valuable only when decision-makers begin to depend on it to optimize decisions. Instilling trust in machine learning is critical for businesses in their efforts to interpret data, derive insights from it, and make their analytical choices accessible and accountable. In aviation, the innovative application of machine learning and analytics can facilitate an understanding of the risk of accidents and other incidents. These occur infrequently, generally in an irregular, unpredictable manner, and cause significant disruption; hence, they are classified as "high-impact, low-probability" (HILP) events. Aviation incident reports are inspected by experts, but it is also important to have a comprehensive overview of incidents and their holistic effects. This study provides an interpretable machine-learning framework for predicting aircraft damage. In addition, it describes patterns of flight specifications detected through the use of a simulation tool and illuminates the underlying reasons for specific aviation accidents. As a result, we can predict aircraft damage with 85% accuracy and 84% in-class accuracy. Most important, we simulate possible combinations of flight type, aircraft type, and pilot expertise to arrive at insights, and we recommend actions for aviation stakeholders such as airport managers, airlines, flight training companies, and aviation policy makers. In short, we combine predictive results with simulations to interpret findings and prescribe actions.
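
The predict-then-simulate workflow described in this abstract can be illustrated compactly. The sketch below is a hedged illustration, not the authors' pipeline: it pairs an interpretable classifier (a shallow decision tree, one plausible choice) with a what-if scan over flight-type, aircraft-type, and pilot-expertise combinations. All factor names, category values, and the model choice are assumptions made for the example.

```python
# Illustrative sketch only -- not the paper's actual pipeline.
# Factor names, categories, and the model choice are assumptions.
from itertools import product

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

FACTORS = {  # hypothetical flight-specification factors
    "flight_type": ["commercial", "training", "private"],
    "aircraft_type": ["fixed_wing", "rotorcraft"],
    "pilot_expertise": ["student", "commercial", "atp"],
}

def build_model() -> Pipeline:
    """A shallow decision tree keeps the learned split rules human-readable."""
    encode = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), list(FACTORS))]
    )
    return Pipeline([("enc", encode), ("clf", DecisionTreeClassifier(max_depth=4))])

def simulate(model: Pipeline) -> pd.DataFrame:
    """Score every factor combination with a fitted model to rank risk profiles."""
    grid = pd.DataFrame(list(product(*FACTORS.values())), columns=list(FACTORS))
    # Assumes a binary damage label, so column 1 is P(damage).
    grid["p_damage"] = model.predict_proba(grid)[:, 1]
    return grid.sort_values("p_damage", ascending=False)
```

After fitting on labeled incident data (`model.fit(X, y)`), `simulate(model)` ranks all factor combinations by predicted damage probability: the kind of simulated what-if output a stakeholder could act on.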
  • Item
    Introduction to the Minitrack on Interpretable Machine Learning
    (2023-01-03) Abdulrashid, Ismail; Topuz, Kazim; Bajaj, Akhilesh
  • Item
    Hebbian Continual Representation Learning
    (2023-01-03) Morawiecki, Pawel; Krutsylo, Andrii; Wołczyk, Maciej; Śmieja, Marek
    Continual Learning aims to bring machine learning into a more realistic scenario, where tasks are learned sequentially and the i.i.d. assumption is not preserved. Although this setting is natural for biological systems, it proves very difficult for machine learning models such as artificial neural networks. To reduce this performance gap, we investigate whether biologically inspired Hebbian learning is useful for tackling continual challenges. In particular, we highlight a realistic and often overlooked unsupervised setting, where the learner has to build representations without any supervision. By combining sparse neural networks with the Hebbian learning principle, we build a simple yet effective alternative (HebbCL) to typical neural network models trained via gradient descent. Thanks to Hebbian learning, the network has easily interpretable weights, which may be essential in critical applications such as security or healthcare. We demonstrate the efficacy of HebbCL in an unsupervised learning setting applied to the MNIST and Omniglot datasets. We also adapt the algorithm to the supervised scenario and obtain promising results in class-incremental learning.
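
The core mechanism this abstract names, a Hebbian update combined with sparsity, can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not the authors' HebbCL implementation: it uses an Oja-style Hebbian rule (one standard way to keep purely Hebbian weight growth bounded) followed by a top-k magnitude mask; the learning rate, sparsity level, and shapes are assumptions.

```python
# Illustrative sketch only -- not the HebbCL code from the paper.
import numpy as np

def hebbian_step(W: np.ndarray, x: np.ndarray, lr: float = 0.01, k: int = 32):
    """One unsupervised update of weights W (out_dim x in_dim) on input x."""
    y = W @ x                                        # linear unit activations
    # Oja-style rule: strengthen co-active pairs, decay to bound the norms.
    W = W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    # Keep only the k largest-magnitude weights per unit, zeroing the rest,
    # so each unit retains a small, directly inspectable receptive field.
    smallest = np.argsort(np.abs(W), axis=1)[:, :-k]
    np.put_along_axis(W, smallest, 0.0, axis=1)
    return W
```

Because each unit ends up with only k nonzero input weights, its learned feature can be read off directly, which is the interpretability property the abstract emphasizes.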
  • Item
    Bayesian Networks for Interpretable Cyberattack Detection
    (2023-01-03) Yang, Barnett; Hoffman, Matt; Brown, Nathanael
    The challenge of cyberattack detection can be illustrated by the complexity of the MITRE ATT&CK™ matrix, which catalogues more than 200 attack techniques (most with multiple sub-techniques). To reliably detect cyberattacks, we propose an evidence-based approach that fuses multiple cyber events over varying time periods to help differentiate normal from malicious behavior. We use Bayesian Networks (BNs), probabilistic graphical models consisting of a set of variables and their conditional dependencies, for fusion and classification because of their interpretable nature, their ability to tolerate sparse or imbalanced data, and their resistance to overfitting. Our technique uses a small collection of expert-informed cyber intrusion indicators to create a hybrid, host-based intrusion detection system (HIDS) that combines data-driven training with expert knowledge. We demonstrate a software pipeline for efficiently generating and evaluating various BN classifier architectures for specific datasets, and we discuss the explainability benefits thereof.
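
The fusion step this abstract describes maps naturally onto a small discrete BN. The sketch below, written with the pgmpy library, is an illustrative toy rather than the paper's architecture: the node names, structure, and all conditional probabilities are invented for the example.

```python
# Illustrative toy BN -- not the paper's architecture or probabilities.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two hypothetical host indicators, both conditioned on a latent Attack node.
model = BayesianNetwork([("Attack", "FailedLogins"), ("Attack", "NewService")])
model.add_cpds(
    TabularCPD("Attack", 2, [[0.99], [0.01]]),          # prior: attacks are rare
    TabularCPD("FailedLogins", 2,
               [[0.90, 0.30],                           # P(indicator=0 | Attack)
                [0.10, 0.70]],                          # P(indicator=1 | Attack)
               evidence=["Attack"], evidence_card=[2]),
    TabularCPD("NewService", 2,
               [[0.95, 0.40],
                [0.05, 0.60]],
               evidence=["Attack"], evidence_card=[2]),
)
assert model.check_model()

# Fuse two observed events into a posterior over Attack.
posterior = VariableElimination(model).query(
    variables=["Attack"], evidence={"FailedLogins": 1, "NewService": 1}
)
print(posterior)
```

Because every conditional probability is an explicit, inspectable table, an analyst can trace exactly which observed events moved the posterior, which is the interpretability argument made for BNs here.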