Discussing the article: "The Disagreement Problem: Diving Deeper into The Complexity of Explainability in AI"
Unfortunately, such explanations only work for stationary dependencies, where the importance of the predictors does not change over time. Predictors like that are easy to obtain for flowers (number of petals, and so on), but difficult to obtain for financial markets.
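One rough way to see this point in practice is to compare feature importances across time windows of the same series. Below is a minimal sketch, assuming scikit-learn, purely synthetic prices and illustrative feature names (none of this is from the article itself):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic random-walk "prices" stand in for real quotes.
rng = np.random.default_rng(0)
df = pd.DataFrame({"close": rng.normal(size=2000).cumsum() + 100})
df["returns"] = df["close"].pct_change()
df["ma_fast"] = df["close"].rolling(5).mean()
df["ma_slow"] = df["close"].rolling(20).mean()
df["target"] = (df["close"].shift(-1) > df["close"]).astype(int)
df = df.dropna()

features = ["returns", "ma_fast", "ma_slow"]
half = len(df) // 2

# Fit the same model on two consecutive time windows and compare
# permutation importances: if the rankings drift, the "explanation"
# from one window does not carry over to the other.
for label, window in [("first half", df.iloc[:half]), ("second half", df.iloc[half:])]:
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(window[features], window["target"])
    result = permutation_importance(model, window[features], window["target"],
                                    n_repeats=10, random_state=0)
    print(label, dict(zip(features, result.importances_mean.round(3))))
```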
Indeed, it is very difficult to get predictors for a set of financial data, and the only solution I can think of is to use the available data to create a new target; then we will have all the predictors for the new target. For example, if we apply Bollinger Bands to a chart, the price can be in one of four states: completely above the Bollinger Bands, between the upper and middle bands, above the lower band but below the middle band, or completely below the bands. If we label these states 1, 2, 3 and 4, we can predict the future market state more accurately than we can predict changes in the price itself.
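As a rough illustration of that labelling (not the exact code behind the article), here is a minimal sketch assuming pandas price data; the 20-period, 2-standard-deviation settings and the function name bollinger_states are purely illustrative:

```python
import numpy as np
import pandas as pd

def bollinger_states(close: pd.Series, period: int = 20, width: float = 2.0) -> pd.Series:
    """Map each close to one of the four Bollinger Band states described above."""
    mid = close.rolling(period).mean()
    std = close.rolling(period).std()
    upper = mid + width * std
    lower = mid - width * std
    conditions = [
        close > upper,                      # state 1: completely above the bands
        (close <= upper) & (close > mid),   # state 2: between the upper and middle bands
        (close <= mid) & (close > lower),   # state 3: between the middle and lower bands
        close <= lower,                     # state 4: completely below the bands
    ]
    # Rows inside the rolling warm-up window get state 0 (undefined).
    return pd.Series(np.select(conditions, [1, 2, 3, 4], default=0), index=close.index)
```

The new target for a classifier is then simply this state series shifted one step into the future.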
From the models I have trained, even just the current market state is enough; adding OHLC and Bollinger Band readings doesn't improve accuracy or stability by much. In the screenshot above, I trained an LDA classifier to predict the next state of the security. The main drawback of this approach is that interpretability can be lost along the way: for example, if the model predicts that the price will remain in state 1, we don't know whether the price is going up or down; we only know where the price is going if the system predicts a change in state, say from 1 to 2. That is the only solution I can offer at the moment: create new targets from the data we have, so that we know the relationship exists because we created it ourselves.
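A minimal, self-contained sketch of that idea, assuming scikit-learn; the synthetic prices, the 20-period/2-sigma bands and the chronological train/test split are all illustrative rather than the exact setup behind the screenshot:

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

# Synthetic closes stand in for real quotes.
close = pd.Series(np.random.default_rng(1).normal(size=3000).cumsum() + 100)
mid = close.rolling(20).mean()
std = close.rolling(20).std()
upper, lower = mid + 2 * std, mid - 2 * std

# Same four-state labelling as above: 1 above the upper band, 2 between upper and middle,
# 3 between middle and lower, 4 below the lower band.
state = pd.Series(np.select([close > upper, close > mid, close > lower],
                            [1, 2, 3], default=4), index=close.index)

# Current state as the only feature, next state as the target;
# drop the rolling warm-up rows and the final row with no "next" state.
frame = pd.DataFrame({"state": state, "next_state": state.shift(-1)}).iloc[20:-1]
X, y = frame[["state"]], frame["next_state"].astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.3)

lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print("next-state accuracy:", round(lda.score(X_test, y_test), 3))
```

With only the current state as input, LDA effectively learns the most likely next state for each current state, which is why the model stays easy to reason about even before any XAI tooling is applied.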
You should read the code of those miserable, prehistoric Bollinger Bands; it is right there in \MQL5\Indicators\Examples\BB.mq5. Mouldy, dreary masks again, trying to calculate some standard deviation...
I once tried reading the code for the RSI indicator in the Examples path you specified, and to be honest with you, I found it challenging to read, and I'm not sure whether I fully internalized what all the code is doing.
Do you think modern indicators like the Vortex Indicator may have overcome some of the limitations of the classical indicators? Or is the problem inherent to technical indicators, because most of them rely on some parameter that has to be calculated and optimized under considerable noise?
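For concreteness, here is a rough sketch of the standard Vortex Indicator calculation (pandas assumed, function name illustrative); even here the whole reading hinges on a single lookback period that has to be chosen or optimized, which is exactly the dependence the question points at:

```python
import pandas as pd

def vortex(high: pd.Series, low: pd.Series, close: pd.Series, period: int = 14) -> pd.DataFrame:
    """Classic Vortex Indicator: +VI and -VI over a rolling lookback window."""
    prev_close = close.shift(1)
    # True range of each bar.
    tr = pd.concat([high - low,
                    (high - prev_close).abs(),
                    (low - prev_close).abs()], axis=1).max(axis=1)
    # Upward and downward vortex movements.
    vm_plus = (high - low.shift(1)).abs()
    vm_minus = (low - high.shift(1)).abs()
    tr_sum = tr.rolling(period).sum()
    return pd.DataFrame({
        "vi_plus": vm_plus.rolling(period).sum() / tr_sum,
        "vi_minus": vm_minus.rolling(period).sum() / tr_sum,
    })
```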
Check out the new article: The Disagreement Problem: Diving Deeper into The Complexity of Explainability in AI.
Dive into the heart of Artificial Intelligence's enigma as we navigate the tumultuous waters of explainability. In a realm where models conceal their inner workings, our exploration unveils the "disagreement problem" that echoes through the corridors of machine learning.
The disagreement problem is an open area of research in an interdisciplinary field known as Explainable Artificial Intelligence (XAI). Explainable Artificial Intelligence attempts to help us understand how our models arrive at their decisions, but unfortunately everything is easier said than done.
We are all aware that machine learning models and available datasets are growing larger and more complex. As a matter of fact, the data scientists who develop machine learning algorithms cannot exactly explain their algorithm’s behaviour across all possible datasets. Explainable Artificial Intelligence (XAI) helps us build trust in our models, explain their functionality and validate that the models are ready to be deployed in production; but as promising as that may sound, this article will show the reader why we cannot blindly trust any explanation we may get from any application of Explainable Artificial Intelligence technology.
Author: Gamuchirai Zororo Ndawana