Market Predictability

 

This paper shows how the performance of the basic Local Linear Wavelet Neural Network (LLWNN) model can be improved by hybridizing it with a fuzzy model. The improved LLWNN-based Neurofuzzy hybrid model is used to predict two currency exchange rates: the U.S. Dollar to the Indian Rupee and the U.S. Dollar to the Japanese Yen. The foreign exchange rates are forecast over different time horizons: 1 day, 1 week, and 1 month ahead. Both the LLWNN and the Neurofuzzy hybrid models are trained with the backpropagation algorithm. Two performance measures, the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), show the superiority of the Neurofuzzy hybrid model over the LLWNN model.
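The two error measures used to compare the models above are standard and easy to reproduce. The following minimal sketch (the series values are hypothetical, not taken from the paper) computes both:

```python
import numpy as np

def rmse(actual, predicted):
    """Root Mean Square Error: sqrt of the mean squared forecast error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent (assumes no zero actuals)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Toy USD/INR-style series (illustrative values only)
y_true = [74.1, 74.5, 74.3, 75.0]
y_pred = [74.0, 74.6, 74.2, 74.8]
print(rmse(y_true, y_pred), mape(y_true, y_pred))
```

Lower values of either measure indicate the better model; the paper reports both because RMSE penalizes large errors more heavily while MAPE is scale-free.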




 

Predicting trends in stock market prices has been an area of interest for researchers for many years due to their complex and dynamic nature. The intrinsic volatility of stock markets across the globe makes the task of prediction challenging. Forecasting and diffusion modeling, although effective, cannot be the panacea for the diverse range of problems encountered in prediction, short-term or otherwise. Market risk, strongly correlated with forecasting errors, needs to be minimized to ensure minimal risk in investment. The authors propose to minimize forecasting error by recasting the forecasting problem as a classification problem, a setting for which machine learning offers a popular suite of algorithms. In this paper, we propose a novel way to minimize the risk of stock market investment by predicting the returns of a stock using a class of powerful machine learning algorithms known as ensemble learning. Technical indicators such as the Relative Strength Index (RSI) and the stochastic oscillator are used as inputs to train the model. The learning model is an ensemble of multiple decision trees. The algorithm is shown to outperform existing algorithms found in the literature, and Out-of-Bag (OOB) error estimates have been found to be encouraging.
Key words: Random Forest classifier, stock price forecasting, exponential smoothing, feature extraction, OOB error, convergence.
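The RSI mentioned as a model input above has a simple closed form based on average gains and losses over a lookback window. A minimal sketch, using plain window averages rather than Wilder's exponential smoothing (a common variant), which would produce one feature column for a classifier such as a random forest:

```python
import numpy as np

def rsi(prices, period=14):
    """Relative Strength Index of a closing-price series.

    RSI = 100 - 100 / (1 + RS), where RS is the ratio of the average
    gain to the average loss over the trailing window."""
    prices = np.asarray(prices, dtype=float)
    deltas = np.diff(prices)
    gains = np.where(deltas > 0, deltas, 0.0)
    losses = np.where(deltas < 0, -deltas, 0.0)
    out = np.full(prices.shape, np.nan)   # undefined until a full window exists
    for i in range(period, len(prices)):
        avg_gain = gains[i - period:i].mean()
        avg_loss = losses[i - period:i].mean()
        if avg_loss == 0.0:               # no losses in window: maximally overbought
            out[i] = 100.0
        else:
            out[i] = 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)
    return out
```

Values near 100 flag overbought conditions and values near 0 oversold ones; in the ensemble setting described above, such indicator columns are stacked into a feature matrix and the class label is the sign of the future return.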



 

Financial trading is one of the most common risky investment activities in the modern economic environment, because financial markets are complex non-linear dynamic systems and uncovering their inherent rules with traditional time series prediction techniques is a challenge. In this paper, we propose a new forecasting method based on multi-order fuzzy time series, technical analysis, and a genetic algorithm. Multi-order fuzzy time series (first-order, second-order, and third-order) are applied in the proposed algorithm, and to improve performance, a genetic algorithm is used to find a good domain partition. Technical indicators such as the Rate of Change (ROC), Moving Average Convergence/Divergence (MACD), and the Stochastic Oscillator (KDJ) are introduced to construct multi-variable fuzzy time series, and exponential smoothing is used to eliminate noise in the time series. In addition to the root mean square error and the mean square error, the directional accuracy rate (DAR) is also used in our empirical studies. We apply the proposed method to forecast five well-known stock indexes and the NTD/USD exchange rate. Experimental results demonstrate that the proposed method outperforms other existing models based on fuzzy time series.
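Two of the ingredients above, exponential smoothing for denoising and the ROC indicator, are short enough to sketch. This is an illustrative implementation (the smoothing constant and lookback are arbitrary, not the paper's settings):

```python
import numpy as np

def exp_smooth(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha * x_t + (1 - alpha) * s_{t-1}.

    Larger alpha tracks the raw series more closely; smaller alpha
    suppresses more noise."""
    series = np.asarray(series, dtype=float)
    out = np.empty_like(series)
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

def roc(prices, n=12):
    """Rate of Change: percentage change of price over n periods."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(prices.shape, np.nan)   # undefined for the first n points
    out[n:] = 100.0 * (prices[n:] - prices[:-n]) / prices[:-n]
    return out
```

In the proposed pipeline the smoothed price and indicator series are then fuzzified over the genetically optimized domain partition before the multi-order rules are extracted.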



 
Forecasting-volatility models typically rely on either daily or high-frequency (HF) data, and the choice between these two categories is not obvious. In particular, HF data allow volatility to be treated as observable, but they suffer from many limitations: microstructure problems such as the discreteness of the data, the properties of the trading mechanism, and the existence of the bid-ask spread. Moreover, such data are not always available, and even when they are, the asset's liquidity may not be sufficient to allow frequent transactions. This paper considers different variants of these two families of forecasting-volatility models, comparing their performance (in terms of Value at Risk, VaR) under the assumptions of jumping prices and leverage effects for volatility. Findings suggest that the GARJI model provides more accurate VaR measures for the S&P 500 index than RV models. Furthermore, the assumption of conditional normality is shown to be insufficient for obtaining accurate risk measures even when the jump contribution is included. More sophisticated models might address this issue and improve VaR results.
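The VaR yardstick used above can be computed in two standard ways: non-parametrically from the empirical return distribution, or parametrically from a volatility forecast under conditional normality (the assumption the paper finds insufficient). A minimal sketch of both, unrelated to the paper's specific GARJI/RV estimators:

```python
import numpy as np
from statistics import NormalDist

def var_historical(returns, alpha=0.01):
    """One-period Value at Risk by historical simulation:
    the loss exceeded with probability alpha."""
    returns = np.asarray(returns, dtype=float)
    return -np.quantile(returns, alpha)

def var_normal(sigma_forecast, alpha=0.01, mu=0.0):
    """Parametric VaR under conditional normality, given a volatility
    forecast sigma_forecast (e.g. from a GARCH-type model)."""
    z = NormalDist().inv_cdf(alpha)   # about -2.326 for alpha = 0.01
    return -(mu + z * sigma_forecast)
```

Backtesting then checks whether realized losses exceed the reported VaR on roughly an alpha fraction of days; systematic excess violations are exactly the symptom of the conditional-normality failure discussed above.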
 

Foreign exchange (ForEx) rates are amongst the most important economic indices in the international monetary markets. A ForEx rate represents the value of one currency in another, and it fluctuates over time. It is related to indicators such as inflation, interest rates, gross domestic product, and so forth. In a series of works, we investigated and confirmed the chaotic property of ForEx rates by finding a positive largest Lyapunov exponent (LLE). As inflation influences ForEx, in this work we address the specific question: is inflation data also chaotic? We collected data for the period 2000 to 2013 and tested for nonlinearity using the surrogate data method. Calculating the LLE, we find evidence of chaos in the inflation data of some countries.
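The surrogate method referred to above works by generating artificial series that share the original data's linear correlation structure but destroy any nonlinear (including chaotic) structure; a nonlinear statistic such as the LLE is then compared between the data and the surrogate ensemble. One common variant, the Fourier phase-randomized surrogate, can be sketched as follows (a simplified illustration, not the authors' exact procedure):

```python
import numpy as np

def phase_surrogate(x, seed=None):
    """Fourier phase-randomized surrogate of a real series.

    The surrogate keeps the power spectrum (and hence the linear
    autocorrelations) of x, while random phases erase any nonlinear
    deterministic structure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(X))
    Xs = np.abs(X) * np.exp(1j * phases)
    Xs[0] = X[0]                      # keep the DC term so the mean is unchanged
    return np.fft.irfft(Xs, n=len(x))
```

If the LLE computed on the real inflation series lies well outside the distribution of LLEs over many such surrogates, the null hypothesis of a linear stochastic process is rejected, which is the sense in which the paper claims evidence of chaos.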



 

The crisis in financial markets has contributed to a significant downturn in the global economy. Significant perturbations and high volatility have also been observed in the forex market. The purpose of this article is to determine which variables from the financial and commodity markets affect exchange rate changes. The article presents the concept of FTS (Fundamental – Technical – Speculative) analysis, used to predict the direction of exchange rate changes. The author's FTS concept combines the two most popular forecasting methods with an attempt to add (parameterize) information on financial institutions' interventions in the forex market and on speculation generally. The paper specifies explanatory variables and determines the nature of the occurring interdependencies. The FTS analysis allows for developing a computer system for decision support in foreign exchange risk management. At the same time, the paper draws attention to the complexity of the issue and shows possible directions for further research.


 
In the last few decades many methods have become available for forecasting. As always, when alternatives exist, choices need to be made so that an appropriate forecasting method can be selected for the specific situation being considered. This paper reports the results of a forecasting competition that provides information to facilitate such a choice. Seven experts in each of the 24 methods forecasted up to 1001 series for six to eighteen time horizons. The results of the competition are presented in this paper, whose purpose is to provide empirical evidence about differences found to exist among the various extrapolative (time series) methods used in the competition.
 
The common way to measure the performance of a volatility prediction model is to assess its ability to predict future volatility. However, as volatility is unobservable, there is no natural metric for measuring the accuracy of any particular model. Noh et al. (1994) assessed the performance of a volatility prediction model by devising trading rules to trade options on a daily basis, using forecasts of option prices obtained from the Black & Scholes (BS) option pricing formula. (An option is a security that gives its owner the right, but not the obligation, to buy or sell an asset at a fixed price within a specified period of time, subject to certain conditions. The trading rule amounts to buying (selling) an option when its price forecast for tomorrow is higher (lower) than today's market settlement price.)

In this paper, adopting Noh et al.'s (1994) idea, we assess the performance of a number of Autoregressive Conditional Heteroscedasticity (ARCH) models. For each trading day, the ARCH model selected on the basis of the prediction error criterion (PEC), introduced by Xekalaki et al. (2003) and suggested by Degiannakis and Xekalaki (1999) in the context of ARCH models, is used to forecast volatility. According to this criterion, the ARCH model with the lowest sum of squared standardized one-step-ahead prediction errors is selected for forecasting future volatility. A comparative study examines which ARCH volatility estimation method yields the highest profits and whether there is any gain in using the PEC model selection algorithm for speculating with financial derivatives. Among the model selection algorithms considered, the PEC algorithm appears to achieve the highest rate of return, if only marginally.
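The PEC selection step above reduces to a simple ranking once each candidate model has produced its one-step-ahead volatility forecasts. A minimal sketch, under the assumption that returns are mean-zero so the standardized prediction error is the return divided by the forecast standard deviation (one plausible reading of the criterion; the paper's exact definition may differ):

```python
import numpy as np

def pec_score(returns, sigma_forecasts):
    """Sum of squared standardized one-step-ahead prediction errors,
    sum_t (r_t / sigma_t)^2, for one candidate model's forecasts."""
    r = np.asarray(returns, dtype=float)
    s = np.asarray(sigma_forecasts, dtype=float)
    return float(np.sum((r / s) ** 2))

def select_model(returns, forecasts_by_model):
    """PEC selection: pick the model whose volatility forecasts
    minimize the score over the evaluation window."""
    return min(forecasts_by_model,
               key=lambda name: pec_score(returns, forecasts_by_model[name]))
```

Re-running this selection each trading day, as described above, yields the rolling model choice whose trading profitability the paper then evaluates.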
 
We present a volatility forecasting comparative study within the ARCH class of models. Our goal is to identify successful predictive models over multiple horizons and to investigate how predictive ability is influenced by choices of estimation window length, innovation distribution, and frequency of parameter re-estimation. Test assets include a range of domestic and international equity indices and exchange rates. We find that model rankings are insensitive to the forecast horizon, and suggestions for estimation best practices emerge. While our main sample spans 1990-2008, we take advantage of the near-record surge in volatility during the last half of 2008 to ask whether forecasting models or best practices break down during periods of turmoil. Surprisingly, we find that volatility during the 2008 crisis was well approximated by one-day-ahead predictions, and should have been within risk managers' 1% confidence intervals up to one month ahead.
 
Cash management models determine policies based either on the statistical properties of daily cash flow or on forecasts. Usual assumptions on the statistical properties of daily cash flow include normality, independence, and stationarity. Surprisingly, little empirical evidence confirming these assumptions has been provided. In this work, we provide a comprehensive study of 54 real-world daily cash flow data sets, which we also make publicly available. Beyond the previous assumptions, we also consider linearity, meaning that cash flow is proportional to a particular explanatory variable, and we propose a new cross-validated test for time series non-linearity. We further analyze the implications of all the aforementioned assumptions for forecasting, showing that: (i) the usual assumptions of normality, independence, and stationarity rarely hold; (ii) non-linearity is often relevant for forecasting; and (iii) common data transformations such as outlier treatment and Box-Cox have little impact on linearity and normality. Our results highlight the utility of non-linear models as a justifiable alternative for time series forecasting.
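The cross-validated flavor of the nonlinearity test above can be illustrated with a crude stand-in (not the authors' test): compare the out-of-fold error of a linear fit against a simple nonlinear alternative, here a quadratic, and flag nonlinearity when the nonlinear model generalizes better.

```python
import numpy as np

def cv_mse(x, y, degree, k=5):
    """k-fold cross-validated MSE of a polynomial fit of the given degree."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)            # indices outside this fold
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((np.polyval(coef, x[fold]) - y[fold]) ** 2))
    return float(np.mean(errs))

def looks_nonlinear(x, y):
    """Flag nonlinearity when a quadratic beats a line out of fold."""
    return cv_mse(x, y, degree=2) < cv_mse(x, y, degree=1)
```

Because both candidates are scored on held-out folds, the extra flexibility of the nonlinear model only wins when it captures genuine structure rather than noise, which is the point of cross-validating the test.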