An Approach to Optimization of Gated Recurrent Unit with Greedy Algorithm

  • Patricio Ignacio Lazcano Muñoz, SRH University Heidelberg, Student
Keywords: Stacked Gated Recurrent Unit, Greedy Algorithm, Time Series Prediction, Hyperparameter Optimization, Stock Price Forecasting.


This study focuses on enhancing the performance of a Stacked Gated Recurrent Unit (GRU) model for time series processing, specifically stock price prediction. The central innovation lies in the integration of a Greedy Algorithm for optimizing hyperparameters such as the look-back period, the number of epochs, the batch size, and the number of units in each GRU layer. Historical stock data from Apple Inc. is used to demonstrate the model's effectiveness in predicting stock prices. The methodology proceeds through a sequence of steps: data loading, preprocessing, dataset splitting, model construction, and evaluation. The Greedy Algorithm iteratively adjusts the hyperparameters to minimize the Root Mean Squared Error (RMSE), thereby refining the model's predictive accuracy. The results show that the integrated Greedy Algorithm significantly improves the model's accuracy in predicting stock prices, indicating its potential in other scenarios requiring precise time series forecasting.
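The greedy optimization described above can be sketched as follows: the search fixes all hyperparameters at initial values, then tunes one hyperparameter at a time, keeping whichever candidate value lowers the RMSE. This is a minimal illustrative sketch, not the paper's implementation; `fake_rmse`, the candidate grids, and the starting values are hypothetical stand-ins for actually training the stacked GRU and measuring its test-set RMSE.

```python
def greedy_search(param_grid, evaluate):
    """Greedily tune one hyperparameter at a time.

    param_grid: dict mapping hyperparameter name -> list of candidate values.
    evaluate:   callable taking a dict of hyperparameters and returning a
                score to minimize (here, RMSE).
    """
    # Start from the first candidate value of each hyperparameter.
    best = {name: values[0] for name, values in param_grid.items()}
    best_score = evaluate(best)
    # Sweep each hyperparameter in turn, keeping any value that improves RMSE.
    for name, values in param_grid.items():
        for value in values:
            candidate = dict(best, **{name: value})
            score = evaluate(candidate)
            if score < best_score:
                best, best_score = candidate, score
    return best, best_score

# Hypothetical stand-in for "train the stacked GRU, return test RMSE".
# A real run would build and fit the model with these hyperparameters.
def fake_rmse(params):
    return abs(params["look_back"] - 30) + abs(params["units"] - 64) / 10

grid = {
    "look_back": [10, 30, 60],     # length of the input window
    "epochs": [50, 100],           # training epochs
    "batch_size": [16, 32, 64],    # mini-batch size
    "units": [32, 64, 128],        # units per GRU layer
}
best_params, best_rmse = greedy_search(grid, fake_rmse)
```

Because each hyperparameter is swept only once while the others stay fixed, the search costs the sum (not the product) of the grid sizes, at the price of possibly missing interactions between hyperparameters.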

