Computational Mathematical Model for Resource Scheduling in Cloud Computing Environments: Integrating Integer Programming and Reinforcement Learning

Authors

  • Zhiyan Li, Handan University
  • Xiaohui Zhang, Handan University
  • Baoxia Jin, Liuzhou Institute of Technology

DOI:

https://doi.org/10.4108/eetsis.11134

Keywords:

Cloud Computing, Resource Scheduling, Integer Programming, Reinforcement Learning, Predictive Optimization

Abstract

Cloud computing offers rapid elasticity and scalability, yet it still relies on traditional scheduling methods that cause delays, SLA violations, and poor flexibility, particularly under changing workloads. In large-scale environments, these inefficiencies translate into excessive resource and power consumption. The authors propose a hybrid approach combining Integer Programming (IP) and Reinforcement Learning (RL) to make resource allocation adaptive and energy-efficient while maintaining SLA compliance. The system is driven by a real Cloud Workload Dataset that provides detailed task-level and system-level information. First, Integer Programming produces a constraint-aware baseline allocation and verifies its feasibility against both resource availability and deadline constraints. RL then refines this allocation, using Q-learning with an ε-greedy policy to make real-time adjustments based on system state and feedback. Learning and optimization thus interact continuously, improving scheduling outcomes over time. Performance is evaluated using standard cloud metrics: SLA compliance, resource utilization, task completion time, energy efficiency, and makespan. The hybrid IP–RL framework achieves 98.6% SLA compliance, 99.6% allocation efficiency, and a 12–18% reduction in task completion time relative to the baseline; it also consumes 15% less energy and reaches 92.4% makespan efficiency. These results show that the hybrid method outperforms traditional scheduling models on all metrics and opens the door to cloud resource scheduling that is both scalable and adaptive.
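The two-stage pipeline described in the abstract can be sketched in miniature. This is an illustrative toy, not the authors' implementation: a greedy first-fit routine stands in for the IP solver's constraint-aware baseline (capacity constraints only, no deadlines), and a small tabular Q-learner with an ε-greedy policy stands in for the RL refinement stage. All state encodings, reward shaping, and hyperparameters here are assumptions chosen for readability.

```python
import random
from collections import defaultdict


def baseline_allocation(demands, capacities):
    """Stand-in for the IP step: a first-fit feasible task-to-VM map
    that respects VM capacities, raising if no feasible slot exists."""
    used = [0] * len(capacities)
    alloc = []
    for d in demands:
        for vm, cap in enumerate(capacities):
            if used[vm] + d <= cap:
                used[vm] += d
                alloc.append(vm)
                break
        else:
            raise ValueError("no feasible VM for demand %r" % d)
    return alloc


class EpsilonGreedyScheduler:
    """Tabular Q-learning over (state, vm) pairs with an ε-greedy policy."""

    def __init__(self, n_vms, epsilon=0.1, alpha=0.5, gamma=0.9):
        self.n_vms = n_vms
        self.epsilon = epsilon        # exploration probability
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.q = defaultdict(float)   # Q[(state, action)] -> value

    def choose(self, state):
        # ε-greedy: explore with probability ε, otherwise exploit.
        if random.random() < self.epsilon:
            return random.randrange(self.n_vms)
        return max(range(self.n_vms), key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(next_state, a)] for a in range(self.n_vms))
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])


def run_demo(n_vms=3, n_tasks=500, seed=0):
    """Toy RL episode: the (hypothetical) reward penalizes load imbalance,
    so the learner is nudged toward spreading tasks across VMs."""
    random.seed(seed)
    sched = EpsilonGreedyScheduler(n_vms)
    loads = [0] * n_vms
    for _ in range(n_tasks):
        state = tuple(sorted(range(n_vms), key=lambda v: loads[v]))
        vm = sched.choose(state)
        loads[vm] += 1
        reward = -(max(loads) - min(loads))   # hypothetical reward signal
        next_state = tuple(sorted(range(n_vms), key=lambda v: loads[v]))
        sched.update(state, vm, reward, next_state)
    return loads


if __name__ == "__main__":
    print(baseline_allocation([2, 3, 4], [5, 5, 5]))
    print(run_demo())
```

In the paper's actual framework, the IP stage would also encode deadline constraints and the RL state would reflect richer system feedback; this sketch only shows how a feasibility-checked baseline and an ε-greedy Q-learning loop fit together.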

References

[1] Azimi Nasab M., Zand M., Eskandari M., Sanjeevikumar P., and Siano P., “Optimal planning of electrical appliance of residential units in a smart home network using cloud services,” Smart Cities, vol. 4, no. 3, pp. 1173–1195, 2021.

[2] Abedi S., Ghobaei-Arani M., Khorami E., and Mojarad M., “Dynamic resource allocation using improved firefly optimization algorithm in cloud environment,” Applied Artificial Intelligence, vol. 36, no. 1, Art. no. 2055394, 2022.

[3] Mao L., Chen R., Cheng H., Lin W., Liu B., and Wang J. Z., “A resource scheduling method for cloud data centers based on thermal management,” Journal of Cloud Computing, vol. 12, no. 1, p. 84, 2023.

[4] Wang L., “An efficient load prediction-driven scheduling strategy model in container cloud,” International Journal of Intelligent Systems, vol. 2023, no. 1, p. 25, 2023.

[5] Lia G., Amadeo M., Ruggeri G., Campolo C., Molinaro A., and Loscrì V., “In-network placement of delay-constrained computing tasks in a softwarized intelligent edge,” Computer Networks, vol. 219, Art. no. 109432, 2022.

[6] Seng J. K. P., Ang K. L., Peter E., and Mmonyi A., “Artificial intelligence and machine learning for multimedia and edge information processing,” Electronics, vol. 11, no. 14, Art. no. 2239, 2022.

[7] Domeke A., Cimoli B., and Monroy I. T., “Integration of network slicing and machine learning into edge networks for low-latency services in 5G and beyond systems,” Applied Sciences, vol. 12, no. 13, Art. no. 6617, 2022.

[8] Huang Y., Mu Z., Wu S., Cui B., and Duan Y., “Revising the observation satellite scheduling problem based on deep reinforcement learning,” Remote Sensing, vol. 13, no. 12, Art. no. 2377, 2021.

[9] Dalal S., “Next-generation cyber-attack prediction for IoT systems: leveraging multi-class SVM and optimized CHAID decision tree,” Journal of Cloud Computing, vol. 12, no. 1, p. 137, 2023.

[10] Elfatih N. M., “Internet of vehicle’s resource management in 5G networks using AI technologies: current status and trends,” IET Communications, vol. 16, no. 5, pp. 400–420, 2022.

[11] Kegyes T., Süle Z., and Abonyi J., “The applicability of reinforcement learning methods in the development of Industry 4.0 applications,” Complexity, vol. 2021, no. 1, Art. no. 7179374, 2021.

[12] Khezri E., Yahya R. O., Hassanzadeh H., Mohaidat M., Ahmadi S., and Trik M., “DLJSF: Data-locality aware job scheduling IoT tasks in fog-cloud computing environments,” Results in Engineering, vol. 21, Art. no. 101780, 2024.

[13] Mohammad A. and Abbas Y., “Key challenges of cloud computing resource allocation in small and medium enterprises,” Digital, vol. 4, no. 2, pp. 372–388, 2024.

[14] Raeisi-Varzaneh M., Dakkak O., Fazea Y., and Kaosar M. G., “Advanced cost-aware max–min workflow tasks allocation and scheduling in cloud computing systems,” Cluster Computing, vol. 27, no. 9, pp. 13407–13419, 2024.

[15] Palani S. and Rameshbabu K., “A secured energy aware resource allocation and task scheduling based on improved cuckoo search algorithm and deep reinforcement learning for e-healthcare applications,” Measurement: Sensors, vol. 31, Art. no. 100988, 2024.

[16] Fernández-Cerero D., Troyano J. A., Jakóbik A., and Fernández-Montes A., “Machine learning regression to boost scheduling performance in hyper-scale cloud-computing data centres,” Journal of King Saud University – Computer and Information Sciences, vol. 34, no. 6 (Part B), pp. 3191–3203, 2022.

[17] Wang Y., Dong S., and Fan W., “Task scheduling mechanism based on reinforcement learning in cloud computing,” Mathematics, vol. 11, no. 15, Art. no. 3364, 2023.

[18] Pandey N. K., Diwakar M., Shankar A., Singh P., Khosravi M. R., and Kumar V., “Energy efficiency strategy for big data in cloud environment using deep reinforcement learning,” Mobile Information Systems, vol. 2022, no. 1, p. 11, 2022.

[19] Shruthi G., Mundada M. R., Sowmya B. J., and Supreeth S., “Mayfly Taylor optimisation-based scheduling algorithm with deep reinforcement learning for dynamic scheduling in fog-cloud computing,” Applied Computational Intelligence and Soft Computing, vol. 2022, no. 1, p. 7, 2022.

[20] Zheng T., Wan J., Zhang J., and Jiang C., “Deep reinforcement learning-based workload scheduling for edge computing,” Journal of Cloud Computing, vol. 11, no. 1, p. 3, 2022.

[21] Amer A. A., Talkhan I. E., Ahmed R., and Ismail T., “An optimized collaborative scheduling algorithm for prioritized tasks with shared resources in mobile-edge and cloud computing systems,” Mobile Networks and Applications, vol. 27, no. 4, pp. 1444–1460, 2022.

[22] Huang Y., Feng B., Cao Y., Guo Z., Zhang M., and Zheng B., “Collaborative on-demand dynamic deployment via deep reinforcement learning for IoV service in multi-edge clouds,” Journal of Cloud Computing, vol. 12, no. 1, p. 119, 2023.

[23] Wu Z. and Yan D., “Deep reinforcement learning-based computation offloading for 5G vehicle-aware multi-access edge computing network,” China Communications, vol. 18, no. 11, pp. 26–41, 2021.

[24] Bouzidi E. H., Outtagarts A., Langar R., and Boutaba R., “Deep Q-network and traffic prediction-based routing optimization in software defined networks,” Journal of Network and Computer Applications, vol. 192, Art. no. 103181, 2021.

[25] Ahmed Q. W., “AI-based resource allocation techniques in wireless sensor Internet of Things networks in energy efficiency with data optimization,” Electronics, vol. 11, no. 13, Art. no. 2071, 2022.

[26] Lakhan A., “Delay optimal schemes for Internet of Things applications in heterogeneous edge cloud computing networks,” Sensors, vol. 22, no. 16, Art. no. 5937, 2022.

[27] Fang C., “A DRL-driven intelligent optimization strategy for resource allocation in cloud-edge-end cooperation environments,” Symmetry, vol. 14, no. 10, Art. no. 2120, 2022.

[28] Zhou G., Tian W., Buyya R., Xue R., and Song L., “Deep reinforcement learning-based methods for resource scheduling in cloud computing: a review and future directions,” Artificial Intelligence Review, vol. 57, no. 5, p. 124, 2024, doi: 10.1007/s10462-024-10756-9.

[29] Zhang W. and Ou H., “Reinforcement learning based multi objective task scheduling for energy efficient and cost-effective cloud edge computing,” Scientific Reports, vol. 15, no. 1, p. 41716, 2025, doi: 10.1038/s41598-025-25666-1.

[30] Alabdullah M. H. and Abido M. A., “Microgrid energy management using deep Q-network reinforcement learning,” Alexandria Engineering Journal, vol. 61, no. 11, pp. 9069–9078, 2022.

[31] Tefera M. K., Zhang S., and Jin Z., “Deep reinforcement learning-assisted optimization for resource allocation in downlink OFDMA cooperative systems,” Entropy, vol. 25, no. 3, Art. no. 413, 2023.

[32] Mathur S., Chaba Y., and Noliya A., “Performance analysis of support vector machine learning based carrier aggregation resource scheduling in 5G mobile communication,” Procedia Computer Science, vol. 218, pp. 2776–2785, 2023.

[33] Ali E. S., “Machine learning technologies for secure vehicular communication in Internet of Vehicles: recent advances and applications,” Security and Communication Networks, vol. 2021, no. 1, p. 23, 2021.

[34] Cloud Workload Dataset, Kaggle, 2025. Available: https://www.kaggle.com/datasets/akhilbs/cloud-workload (accessed Oct. 22, 2025).

[35] Kuppusamy P., “Job scheduling problem in fog-cloud-based environment using reinforced social spider optimization,” Journal of Cloud Computing, vol. 11, no. 1, p. 99, 2022.

[36] Rossi A., Visentin A., Carraro D., Prestwich S., and Brown K. N., “Forecasting workload in cloud computing: towards uncertainty-aware predictions and transfer learning,” arXiv:2303.13525, 2023.

Published

27-03-2026

How to Cite

Li Z, Zhang X, Jin B. Computational Mathematical Model for Resource Scheduling in Cloud Computing Environments: Integrating Integer Programming and Reinforcement Learning. EAI Endorsed Scal Inf Syst [Internet]. 2026 Mar. 27 [cited 2026 Mar. 27];12(8). Available from: https://publications.eai.eu/index.php/sis/article/view/11134