Explainability of multi-modal machine learning and deep learning applications in health
Abstract
Artificial Intelligence (AI) continues to advance and has the potential to be incorporated into healthcare, allowing for more efficient and effective diagnosis. However, many AI models operate as "black boxes," offering little insight into how their conclusions are drawn, which leads to a lack of trust. Explainable AI (XAI) can enable clinicians and patients to understand why a model made a given prediction, whether for model debugging or for deriving clinically useful insights. This study proposes a pose-inspired framework for autism-focused video behavioral analysis, exploring feature influence scores for a Long Short-Term Memory neural network (LSTM+NN) applied to video data. The applicability of multimodal XAI is further demonstrated on publicly available tabular data using variations of a Random Forest model for diabetes diagnosis.
Extent
94 pages
Type
Thesis
Text
Rights
All UHM dissertations and theses are protected by copyright. They may be viewed from this source for any purpose, but reproduction or distribution in any format is prohibited without written permission from the copyright owner.