Hybrid System Prediction for the Stock Market: The Case of Transitional Markets
Abstract
This paper develops and tests an enhanced fuzzy neural network backpropagation model for the prediction of stock market indexes and compares it with a traditional neural network backpropagation model. The objective of the research is to assess the applicability of the enhanced model to index prediction, with a particular focus on transitional markets. The methodology integrates fuzzified weights into the neural network. The results are intended to benefit both the broader investment community and academia: the enhanced model can support investment decision-making in practice, and the findings add to the knowledge in this subject area.
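The paper does not include an implementation, but to make the idea of "fuzzified weights in a backpropagation network" concrete, the sketch below shows one common formulation from the fuzzy neural network literature: each weight is represented as a symmetric triangular fuzzy number, and a crisp input is propagated through an alpha-cut of the weights with interval arithmetic, yielding an interval-valued prediction. This is an illustration only; the layer sizes, initial values, and choice of alpha are hypothetical and not taken from the paper, and the fuzzified backpropagation update of the weights is omitted.

```python
# Illustrative sketch: symmetric triangular fuzzy weights propagated through
# an alpha-cut with interval arithmetic.  Hypothetical values; training of the
# fuzzy weights (fuzzified backpropagation) is omitted.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def interval_dot(x_lo, x_hi, w_lo, w_hi):
    """Bounds of sum_i x_i * w_ij when x_i lies in [x_lo_i, x_hi_i] and
    w_ij lies in [w_lo_ij, w_hi_ij]; min/max over the four corner products."""
    corners = np.stack([x_lo[:, None] * w_lo, x_lo[:, None] * w_hi,
                        x_hi[:, None] * w_lo, x_hi[:, None] * w_hi])
    return corners.min(axis=0).sum(axis=0), corners.max(axis=0).sum(axis=0)

class FuzzyLayer:
    """Fully connected layer whose weights are symmetric triangular fuzzy
    numbers stored as (center, spread) pairs."""

    def __init__(self, n_in, n_out, rng):
        self.center = rng.uniform(-0.5, 0.5, size=(n_in, n_out))
        self.spread = rng.uniform(0.0, 0.1, size=(n_in, n_out))  # non-negative fuzziness

    def forward(self, x_lo, x_hi, alpha):
        # Alpha-cut of a symmetric triangular number: center +/- (1 - alpha) * spread.
        half = (1.0 - alpha) * self.spread
        z_lo, z_hi = interval_dot(x_lo, x_hi, self.center - half, self.center + half)
        return sigmoid(z_lo), sigmoid(z_hi)  # sigmoid is monotone, so the bounds carry over

rng = np.random.default_rng(0)
hidden, output = FuzzyLayer(4, 6, rng), FuzzyLayer(6, 1, rng)

x = np.array([0.012, -0.004, 0.021, 0.008])    # e.g. recent daily index returns (crisp input)
h_lo, h_hi = hidden.forward(x, x, alpha=0.5)   # crisp input: lower bound equals upper bound
y_lo, y_hi = output.forward(h_lo, h_hi, alpha=0.5)
print(y_lo, y_hi)                              # interval-valued prediction at this alpha-cut
```

In a full model of this kind, training would adjust both the centers and the spreads of the fuzzy weights by backpropagating an error defined on the output intervals; that is the sort of enhancement over a crisp-weight backpropagation network that the abstract describes.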