End-to-End Latency Prediction of Microservices Workflow on Kubernetes: A Comparative Evaluation of Machine Learning Models and Resource Metrics

Mohamed, Haytham
El-Gayar, Omar
Application design has been revolutionized by the adoption of the microservices architecture. The ability to estimate end-to-end response latency would help software practitioners design and operate microservices applications reliably and with efficient resource capacity. The objective of this research is to examine and compare data-driven approaches and a variety of resource metrics for predicting the end-to-end response latency of a containerized microservices workflow running on a cloud Kubernetes platform. We implemented and evaluated the prediction using a deep neural network and various machine learning techniques while investigating the selection of resource utilization metrics. Observed characteristics and performance metrics from both the microservices and platform levels were used as prediction indicators. To compare performance models, we experimented with the open-source Sock Shop containerized benchmark application. The deep neural network exhibited the best prediction accuracy using all metrics, while the other machine learning techniques demonstrated acceptable performance using a subset of the metrics.
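The general workflow the abstract describes can be sketched with off-the-shelf regressors. This is a minimal illustration, not the study's pipeline: the feature names, the synthetic latency formula, and the model hyperparameters below are all assumptions, and the data is randomly generated rather than collected from Sock Shop or Kubernetes.

```python
# Hedged sketch: predict end-to-end latency from resource-utilization metrics
# with ML regressors. All features and the latency formula are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Synthetic stand-ins for service- and platform-level metrics:
# CPU utilization, memory utilization, network I/O, request rate.
cpu = rng.uniform(0.0, 0.95, n)
mem = rng.uniform(0.0, 1.0, n)
net = rng.uniform(0.0, 1.0, n)
rps = rng.uniform(0.0, 1.0, n)
X = np.column_stack([cpu, mem, net, rps])
# Assumed latency (ms): queueing-like growth as CPU saturates, plus noise.
y = 50 + 200 * cpu / (1.05 - cpu) + 30 * net + 20 * rps + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A tree ensemble and a small neural network, as in the paper's comparison
# (architectures here are placeholders, not the study's configurations).
models = {
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "neural_net": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    ),
}
maes = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    maes[name] = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name}: MAE = {maes[name]:.1f} ms")
```

Swapping in other regressors (gradient boosting, linear models) and dropping metric columns reproduces the kind of model-versus-metric-subset comparison the abstract reports.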
Track: Practitioner Research Insights: Applications of Science and Technology in Work
Keywords: kubernetes, latency, machine learning, metrics, microservices