Neural Networks

 

In this paper, a novel decision support system using a computationally efficient functional link artificial neural network (CEFLANN) and a set of rules is proposed to generate trading decisions more effectively. The problem of stock trading decision prediction is articulated as a classification problem with three class values representing the buy, hold and sell signals. The CEFLANN network used in the decision support system produces a set of continuous trading signals within the range 0 to 1 by analyzing the nonlinear relationship that exists among a few popular technical indicators. The output trading signals are then used to track the trend and to produce trading decisions based on that trend using a set of trading rules. The novelty of the approach is to generate profitable stock trading decision points by integrating the learning ability of the CEFLANN neural network with technical analysis rules. To assess the potential of the proposed method, its performance is also compared with other machine learning techniques such as the Support Vector Machine (SVM), Naive Bayes model, K-nearest neighbor (KNN) model and Decision Tree (DT) model.
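As an illustration of the rule-based stage only (the CEFLANN network itself is omitted), the following sketch assumes a continuous trading signal in the range 0 to 1 and two hypothetical thresholds; the paper's trend-tracking rules may differ:

```python
import numpy as np

def signals_to_decisions(signal, buy_thr=0.7, sell_thr=0.3):
    """Map a continuous trading signal in [0, 1] to buy/hold/sell labels.

    `buy_thr` and `sell_thr` are hypothetical thresholds; the paper's
    trend-tracking rules are more elaborate than a simple cutoff.
    """
    signal = np.asarray(signal, dtype=float)
    decisions = np.full(len(signal), "hold", dtype=object)
    decisions[signal >= buy_thr] = "buy"
    decisions[signal <= sell_thr] = "sell"
    return decisions

# Example: a rising then falling signal produced by the network.
print(signals_to_decisions([0.2, 0.5, 0.8, 0.75, 0.4, 0.1]))
# ['sell' 'hold' 'buy' 'buy' 'hold' 'sell']
```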



 

The motivation behind this research is to innovatively combine new methods like wavelet, principal component analysis (PCA), and artificial neural network (ANN) approaches to analyze trade in today's increasingly difficult and volatile financial futures markets. The main focus of this study is to facilitate forecasting by using an enhanced denoising process on market data, taken as a multivariate signal, in order to remove the noise common to the open-high-low-close signals of a market. This research offers evidence on the predictive ability and the profitability of abnormal returns of a new hybrid forecasting model using Wavelet-PCA denoising and ANN (named WPCA-NN) on futures contracts of Hong Kong's Hang Seng futures, Japan's NIKKEI 225 futures, Singapore's MSCI futures, South Korea's KOSPI 200 futures, and Taiwan's TAIEX futures from 2005 to 2014. Using a host of technical analysis indicators consisting of RSI, MACD, MACD Signal, Stochastic Fast %K, Stochastic Slow %K, Stochastic %D, and Ultimate Oscillator, empirical results show that the annual mean returns of WPCA-NN exceed those of the threshold buy-and-hold for the validation, test, and evaluation periods; this is inconsistent with the traditional random walk hypothesis, which holds that mechanical rules cannot outperform the threshold buy-and-hold. The findings, however, are consistent with the literature that advocates technical analysis.
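As a rough sketch of the denoising idea only (not the full WPCA-NN model), assuming PyWavelets and scikit-learn, a Daubechies-4 wavelet, universal soft thresholding, and two retained principal components, all of which are illustrative choices rather than the paper's settings:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(x, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 1-D series (universal threshold)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from the finest scale
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def wpca_denoise(ohlc, n_components=2):
    """Denoise each open/high/low/close column, then keep only the leading
    principal components as a common-signal filter (illustrative only)."""
    den = np.column_stack([wavelet_denoise(ohlc[:, j]) for j in range(ohlc.shape[1])])
    pca = PCA(n_components=n_components)
    return pca.inverse_transform(pca.fit_transform(den))

# Example: denoise a hypothetical OHLC matrix of 256 daily observations.
ohlc = np.cumsum(np.random.randn(256, 4) * 0.5, axis=0) + 100.0
clean = wpca_denoise(ohlc)
```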


 
Currency exchange is the trading of one currency against another. FOREX rates are influenced by many correlated economic, political and psychological factors, and hence predicting them is an uphill task. Methods used to predict FOREX rates include statistical analysis, time series analysis, fuzzy systems, neural networks, and hybrid systems, but these methods struggle to predict the exchange rate accurately. An Artificial Neural Network (ANN) and a hybrid neuro-fuzzy system (ANFIS) are proposed to predict the future rate of the FOREX market. The ANN, a multilayer perceptron (MLP), is used to predict the rise or fall in the exchange rate, while the ANFIS model is used to predict the exchange rate for the next day. For the experiment, the USD/INR exchange rate from the forex market is used. Mean Square Error (MSE) and Mean Absolute Error (MAE) are used as performance indicators. During training, the ANN achieved an MSE of 0.033 and an MAE of 0.0002, while the ANFIS model achieved an MSE of 0.024 and an MAE of 6.7x10^-8. During testing, the ANN achieved an MSE of 0.003 and an MAE of 0.00082, while the ANFIS model achieved an MSE of 0.02 and an MAE of 0.00792.
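For context, a minimal sketch of the next-day-rate regression and the two error measures, assuming lagged rates as inputs and a scikit-learn MLP regressor as a stand-in for the ANN (the ANFIS model is not shown); the series here is simulated, not the actual USD/INR data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error

def lagged_matrix(rates, n_lags=5):
    """Use the previous n_lags rates as features for the next-day rate."""
    X = np.column_stack([rates[i : len(rates) - n_lags + i] for i in range(n_lags)])
    y = rates[n_lags:]
    return X, y

# rates: hypothetical 1-D array of (scaled) exchange rates.
rates = np.cumsum(np.random.randn(500) * 0.01) + 1.0
X, y = lagged_matrix(rates)
split = int(0.8 * len(y))
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MSE:", mean_squared_error(y[split:], pred))
print("MAE:", mean_absolute_error(y[split:], pred))
```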
 
The marketing literature has so far only considered attraction models with strict functional forms. Greater flexibility can be achieved by the neural net based approach introduced here, which assesses brands' attraction values by means of a perceptron with one hidden layer. Using log-ratio transformed market shares as dependent variables, parameters are estimated by stochastic gradient descent followed by a quasi-Newton method. For store-level data the neural net model performs better and implies a price response qualitatively different from that of the well-known MNL attraction model. Price elasticities of these competing models also lead to distinct managerial implications.
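As a brief illustration of the log-ratio transform and the MNL attraction benchmark mentioned above (a minimal sketch; the neural attraction specification and its two-stage estimation are not reproduced):

```python
import numpy as np

def log_ratio_transform(shares, base=-1):
    """Log-ratio transform of market shares: log(s_i / s_base).

    `shares` is an (n_periods, n_brands) array of shares summing to 1 per row;
    the base brand (here the last column) drops out of the transformed system.
    """
    shares = np.asarray(shares, dtype=float)
    return np.log(shares / shares[:, [base]])

def mnl_shares(attractions):
    """MNL attraction model: share_i = A_i / sum_j A_j (attractions > 0)."""
    A = np.asarray(attractions, dtype=float)
    return A / A.sum(axis=1, keepdims=True)

shares = mnl_shares([[2.0, 1.0, 1.0], [1.5, 2.5, 1.0]])
print(log_ratio_transform(shares))   # zeros in the base (last) column
```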
 

This paper proposes that a smoothing approach that takes into account the entropic information provided by the Renyi method achieves acceptable performance in terms of forecasting errors. The proposed scheme is examined on benchmark chaotic time series, such as the Mackey-Glass, Lorenz and Henon maps, the Lynx series, and the rainfall series from Santa Francisca, with white noise added, using a neural network-based energy associated (EAS) predictor filter modified by the Renyi entropy of the series. In particular, the approach addresses time series, whether short or long, for which the underlying dynamical system is nonlinear and temporal dependencies span long time intervals (so-called long-memory processes). In such cases, the inherent nonlinearity of neural network models and their higher robustness to noise seem to partially explain their better prediction performance when entropic information is extracted from the series. Then, to demonstrate that permutation entropy is computationally efficient, robust to outliers, and effective at measuring the complexity of time series, computational results are evaluated against several nonlinear ANN predictors proposed previously, showing the predictability of noisy rainfall and chaotic time series reported in the literature.
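Since permutation entropy is central to the evaluation, here is a minimal, self-contained sketch of the standard Bandt-Pompe permutation entropy; the embedding order and delay are illustrative choices, and this is not the paper's full EAS predictor:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D series.

    Counts the relative frequency of ordinal patterns of length `order`
    and returns their Shannon entropy (normalized to [0, 1] by default).
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    patterns = np.array([np.argsort(x[i : i + order * delay : delay]) for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    H = -np.sum(p * np.log2(p))
    return H / np.log2(factorial(order)) if normalize else H

# A noisy series has entropy near 1; a monotone series near 0.
rng = np.random.default_rng(0)
print(permutation_entropy(rng.standard_normal(1000)))   # close to 1
print(permutation_entropy(np.arange(1000.0)))           # close to 0
```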



 
We propose a forecasting procedure based on multivariate dynamic kernels, with the capability of integrating information measured at different frequencies and at irregular time intervals in financial markets. A data compression process redefines the original financial time series into temporal data blocks, analyzing the temporal information of multiple time intervals. The analysis is carried out through multivariate dynamic kernels within support vector regression. We also propose two kernels for financial time series that are computationally efficient without sacrificing accuracy. The efficacy of the methodology is demonstrated by empirical experiments on forecasting the challenging S&P 500 market.
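A minimal sketch of the block-and-kernel idea, assuming scikit-learn's SVR with a precomputed kernel and a plain RBF kernel over flattened temporal blocks as a stand-in for the paper's multivariate dynamic kernels:

```python
import numpy as np
from sklearn.svm import SVR

def make_blocks(series, block_len=10):
    """Compress a (T, n_features) multivariate series into overlapping temporal blocks."""
    X = np.asarray(series, dtype=float)
    blocks = np.array([X[i : i + block_len].ravel() for i in range(len(X) - block_len)])
    targets = X[block_len:, 0]          # forecast the first feature one step ahead
    return blocks, targets

def rbf_block_kernel(A, B, gamma=1e-3):
    """RBF kernel between flattened blocks (a stand-in for a dynamic kernel)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

series = np.random.randn(300, 4).cumsum(axis=0)    # hypothetical multivariate market data
Xb, yb = make_blocks(series)
split = int(0.8 * len(yb))
K_train = rbf_block_kernel(Xb[:split], Xb[:split])
K_test = rbf_block_kernel(Xb[split:], Xb[:split])
svr = SVR(kernel="precomputed").fit(K_train, yb[:split])
pred = svr.predict(K_test)
```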
 

This study presents a neural network and web-based decision support system (DSS) for foreign exchange (forex) forecasting and trading decisions, which is adaptable to the needs of financial organizations and individual investors. We integrate a back-propagation neural network (BPNN)-based forex rolling forecasting system, which predicts the change in direction of daily exchange rates, with a web-based forex trading decision support system that retrieves the forecasting data and provides investment decision suggestions for financial practitioners. This research describes the structure of the DSS through an integrated framework and shows that the DSS is integrated and user-oriented in its implementation; practical applications show that the DSS achieves high forecasting accuracy and that its trading recommendations are reliable.
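As a rough sketch of the rolling direction-forecasting component only (the web-based DSS layer is not shown), assuming lagged daily returns as inputs and a scikit-learn MLP classifier as a stand-in for the BPNN:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def direction_dataset(rates, n_lags=5):
    """Features: the last n_lags daily returns; label: 1 if the next return is positive."""
    returns = np.diff(np.asarray(rates, dtype=float))
    X = np.column_stack([returns[i : len(returns) - n_lags + i] for i in range(n_lags)])
    y = (returns[n_lags:] > 0).astype(int)
    return X, y

rates = np.cumsum(np.random.randn(600) * 0.005) + 100.0   # hypothetical exchange-rate path
X, y = direction_dataset(rates)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)

# Rolling (walk-forward) evaluation: refit on an expanding window, predict the next day.
hits = []
for t in range(400, len(y), 5):        # refit every 5 days to keep the sketch fast
    clf.fit(X[:t], y[:t])
    hits.append(clf.predict(X[t : t + 1])[0] == y[t])
print("directional accuracy:", np.mean(hits))
```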



 
Noise injection is an off-the-shelf method to mitigate over-fitting in neural networks (NNs). Recent developments in Bernoulli noise injection, as implemented in the dropout and shakeout procedures, demonstrate the efficiency and feasibility of noise injection in regularizing deep NNs. We propose whiteout, a new regularization technique that injects adaptive Gaussian noise into a deep NN. Whiteout has three tuning parameters, which offer flexibility during training. We show that, in the context of generalized linear models, whiteout is associated with a deterministic optimization objective function with a closed-form penalty term and includes lasso, ridge regression, adaptive lasso, and elastic net as special cases. We also demonstrate that whiteout can be viewed as robust learning of an NN model in the presence of small and insignificant perturbations in the input and hidden nodes. Compared to dropout, whiteout performs better when training data are of relatively small size, with sparsity introduced through the l1 regularization. Compared to shakeout, the penalized objective function in whiteout has better convergence behavior and is more stable given the continuity of the injected noise. We establish theoretically that the noise-perturbed empirical loss function with whiteout converges almost surely to the ideal loss function, and that the estimates of NN parameters obtained from minimizing the former are consistent with those obtained from minimizing the latter. Computationally, whiteout can be incorporated into the back-propagation algorithm and is computationally efficient. The superiority of whiteout over dropout and shakeout in training NNs for classification is demonstrated on the MNIST data.
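A minimal PyTorch sketch of the noise-injection idea, assuming additive Gaussian noise on the inputs of a linear layer with a weight-dependent variance controlled by three tuning parameters (sigma, gamma, lam); the exact variance form used by whiteout should be taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WhiteoutLinear(nn.Linear):
    """Linear layer with whiteout-style noise injected into its inputs during training.

    Each connection perturbs its input with Gaussian noise whose variance,
    sigma^2 * |w|^(-gamma) + lam, depends on the connection weight (the form
    assumed here; see the paper for the exact specification).
    """
    def __init__(self, in_features, out_features, sigma=0.5, gamma=1.0, lam=0.0):
        super().__init__(in_features, out_features)
        self.sigma, self.gamma, self.lam = sigma, gamma, lam

    def forward(self, x):
        out = F.linear(x, self.weight, self.bias)
        if self.training:
            w = self.weight                                      # shape (out, in)
            std = torch.sqrt(self.sigma ** 2 * w.abs().clamp_min(1e-8) ** (-self.gamma) + self.lam)
            eps = torch.randn(x.shape[0], *w.shape, device=x.device) * std   # (batch, out, in)
            out = out + (eps * w).sum(dim=-1)                    # noise propagated through the weights
        return out

# Usage: drop-in replacement for nn.Linear inside a small network.
net = nn.Sequential(WhiteoutLinear(10, 32), nn.ReLU(), WhiteoutLinear(32, 2))
```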

 
While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this paper, we extend this idea to allow the training of a student that is deeper and thinner than the teacher, using not only the outputs but also the intermediate representations learned by the teacher as hints to improve the training process and the final performance of the student. Because the student intermediate hidden layer will generally be smaller than the teacher's intermediate hidden layer, additional parameters are introduced to map the student hidden layer to the prediction of the teacher hidden layer. This allows one to train deeper students that can generalize better or run faster, a trade-off that is controlled by the chosen student capacity. For example, on CIFAR-10, a deep student network with almost 10.4 times fewer parameters outperforms a larger, state-of-the-art teacher network.
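A minimal PyTorch sketch of the hint-based stage: a small regressor maps the student's guided layer to the teacher's hint layer and an L2 loss between them is minimized; the channel sizes and the 1x1 convolutional regressor are illustrative choices:

```python
import torch
import torch.nn as nn

class HintLoss(nn.Module):
    """L2 loss between a teacher hint layer and a regressed student guided layer."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 convolution maps the (smaller) student feature map to the teacher's width.
        self.regressor = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        return ((self.regressor(student_feat) - teacher_feat.detach()) ** 2).mean()

# Hypothetical feature maps: the student is thinner (32 channels) than the teacher (64).
hint = HintLoss(student_channels=32, teacher_channels=64)
s = torch.randn(8, 32, 16, 16)     # student guided-layer output
t = torch.randn(8, 64, 16, 16)     # teacher hint-layer output
loss = hint(s, t)                  # added to the distillation / task loss during training
loss.backward()
```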
 

The Foreign Exchange Market is the biggest and one of the most liquid markets in the world. It has always been one of the most challenging markets as far as short-term prediction is concerned. Due to the chaotic, noisy, and non-stationary nature of the data, the majority of the research has focused on daily, weekly, or even monthly prediction. The literature review revealed a gap in intra-day market prediction. To address this gap, this paper introduces a prediction and decision making model based on Artificial Neural Networks (ANN) and Genetic Algorithms. The dataset utilized for this research comprises 70 weeks of past currency rates of the three most traded currency pairs: GBP/USD, EUR/GBP, and EUR/USD. Initial statistical tests confirmed, with more than 95% confidence, that the daily FOREX currency rate time series are not randomly distributed. Another important result is that the proposed model achieved 72.5% prediction accuracy. Furthermore, implementing the optimal trading strategy, this model produced a 23.3% annualized net return.
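For reference, a minimal sketch of how directional accuracy and annualized net return might be computed from predicted directions and realized returns (the GA-optimized trading strategy itself is not reproduced, and the figures below are simulated):

```python
import numpy as np

def directional_accuracy(pred_up, returns):
    """Fraction of periods where the predicted direction matches the realized one."""
    return np.mean((np.asarray(returns) > 0) == np.asarray(pred_up, dtype=bool))

def annualized_net_return(pred_up, returns, cost=0.0, periods_per_year=252):
    """Go long when an up-move is predicted, short otherwise; subtract a per-trade cost."""
    pos = np.where(np.asarray(pred_up, dtype=bool), 1.0, -1.0)
    strat = pos * np.asarray(returns) - cost
    total = np.prod(1.0 + strat)
    return total ** (periods_per_year / len(strat)) - 1.0

# Hypothetical example: hourly predictions on a simulated intra-day series.
rng = np.random.default_rng(1)
r = rng.normal(0, 0.001, 2000)
p = r + rng.normal(0, 0.002, 2000) > 0      # noisy predictions of the sign
print(directional_accuracy(p, r), annualized_net_return(p, r, periods_per_year=252 * 8))
```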