Detection of Important States through an Iterative Q-value Algorithm for Explainable Reinforcement Learning
Date
2024-01-03
Starting Page
1401
Abstract
To generate safe and trustworthy Reinforcement Learning agents, it is fundamental to recognize the meaningful states in which a particular action should be performed. This makes it possible to produce more accurate explanations of a trained agent's behaviour and, at the same time, to reduce the risk of committing a fatal error. In this study, we improve existing Q-value-based metrics for detecting essential states in Reinforcement Learning by introducing a scaled iterative algorithm called IQVA. The key observation behind our approach is that a state is important not only if its action choice has a high impact but also if the state appears often across different episodes. We compare our approach with two baseline measures and a newly introduced value in grid-world environments to demonstrate its efficacy. In this way, we show how the proposed methodology highlights only the states that are meaningful for that particular agent, instead of emphasizing the importance of states that are rarely visited.
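The core idea stated in the abstract, weighting a Q-value importance measure by how often a state is actually visited, can be sketched as follows. This is an illustrative assumption, not the paper's exact IQVA definition: the function name, the spread measure (best minus worst Q-value), and the relative-frequency scaling are all placeholders for the algorithm described in the paper.

```python
from collections import defaultdict

def state_importance(q_table, episodes):
    """Illustrative importance score: Q-value spread scaled by visit frequency.

    q_table: dict mapping state -> list of Q-values (one per action).
    episodes: list of episodes, each a list of visited states.
    """
    # Count how often each state appears across all episodes.
    visits = defaultdict(int)
    for episode in episodes:
        for state in episode:
            visits[state] += 1
    total = sum(visits.values())

    scores = {}
    for state, q_values in q_table.items():
        # Classic Q-value importance: gap between best and worst action.
        spread = max(q_values) - min(q_values)
        # Scale by relative visitation frequency so that rarely visited
        # states are de-emphasized (the key observation in the abstract).
        freq = visits[state] / total if total else 0.0
        scores[state] = spread * freq
    return scores
```

With this scaling, a state whose action gap is large but which the agent almost never reaches receives a low score, matching the abstract's claim that importance should reflect both action impact and visitation frequency.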
Keywords
Intelligent Decision Support on Networks – Data-driven Optimization, Augmented and Explainable AI in Complex Supply Chains, explainable reinforcement learning, importance analysis, important states, safe reinforcement learning
Extent
8 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International