Deep Reinforcement Learning for Supply Chain Synchronization

dc.contributor.author Jackson, Ilya
dc.date.accessioned 2021-12-24T17:35:13Z
dc.date.available 2021-12-24T17:35:13Z
dc.date.issued 2022-01-04
dc.description.abstract Supply chain synchronization can prevent the “bullwhip effect” and significantly mitigate ripple effects caused by operational failures. This paper demonstrates how deep reinforcement learning agents based on the proximal policy optimization algorithm can synchronize inbound and outbound flows if end-to-end visibility is provided. The paper concludes that the proposed solution has the potential to perform adaptive control in complex supply chains. Furthermore, the proposed approach is general, task-agnostic, and adaptive in the sense that prior knowledge about the system is not required.
dc.format.extent 9 pages
dc.identifier.doi 10.24251/HICSS.2022.246
dc.identifier.isbn 978-0-9981331-5-7
dc.identifier.uri http://hdl.handle.net/10125/79578
dc.language.iso eng
dc.relation.ispartof Proceedings of the 55th Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Simulation Modeling and Digital Twins for Decision Making in the Age of Industry 4.0
dc.subject deep RL
dc.subject PPO
dc.subject reinforcement learning
dc.subject supply chain
dc.title Deep Reinforcement Learning for Supply Chain Synchronization
dc.type.dcmi text
Files
Original bundle: 0195.pdf (990 KB, Adobe Portable Document Format)
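
Illustrative note: the abstract refers to proximal policy optimization (PPO) for synchronizing inbound and outbound flows. Since the paper's own environment and code are not part of this record, the following is only a minimal sketch assuming a toy single-echelon environment and the off-the-shelf PPO implementation from stable-baselines3; the environment class, state, action, and reward definitions below are illustrative assumptions, not the paper's model.

# Minimal, hypothetical sketch: PPO on a toy single-echelon supply chain
# environment. "SupplyChainEnv" and its reward are illustrative assumptions,
# not the environment described in the paper.
import gymnasium as gym
import numpy as np
from gymnasium import spaces
from stable_baselines3 import PPO


class SupplyChainEnv(gym.Env):
    """Agent sets the inbound (replenishment) quantity each period to
    track stochastic outbound demand while limiting held inventory."""

    def __init__(self, capacity=100.0, horizon=52):
        super().__init__()
        self.capacity = capacity
        self.horizon = horizon
        # Observation: current inventory level and last observed demand.
        self.observation_space = spaces.Box(
            low=0.0, high=capacity, shape=(2,), dtype=np.float32)
        # Action: order quantity as a fraction of capacity.
        self.action_space = spaces.Box(
            low=0.0, high=1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.inventory = self.capacity / 2.0
        self.last_demand = 0.0
        return self._obs(), {}

    def step(self, action):
        order = float(np.clip(action[0], 0.0, 1.0)) * self.capacity
        demand = float(self.np_random.uniform(0.0, self.capacity / 4.0))
        self.inventory = min(self.capacity, self.inventory + order)
        served = min(self.inventory, demand)
        self.inventory -= served
        # Reward synchronized flows: serve demand, penalize holding
        # and lost sales.
        reward = served - 0.1 * self.inventory - (demand - served)
        self.last_demand = demand
        self.t += 1
        return self._obs(), reward, self.t >= self.horizon, False, {}

    def _obs(self):
        return np.array([self.inventory, self.last_demand], dtype=np.float32)


model = PPO("MlpPolicy", SupplyChainEnv(), verbose=0)
model.learn(total_timesteps=10_000)

In a setting with end-to-end visibility, the observation would instead expose the state of the wider chain (e.g., upstream and downstream inventories and orders), but the PPO training loop itself would remain unchanged.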