Using artificial intelligence at MTS

 

To SergNF.

I'm as far from this topic as the South Pole. I only got hooked by eugenk's post, but nobody picked it up. And when I finally decided to look at the Expert Advisor, I strained for a long time trying to figure out where the AI in it is and how to train it. :-))

But then, when even elementary questions started going off the rails, I couldn't resist and butted in. :-)

Unfortunately, the technology itself has been discussed very little here - mostly just the Expert Advisor. But the technology is certainly interesting, and it gave me food for thought. So the thread was very useful for me.

These beads are a delicate thing; they can't just be applied to forex. :-)))

 
I can't watch this anymore. It's all nonsense, sorry for being blunt. SergNF, where can I get the beads?
 
SergNF:

You take your favourite indicator (through iCustom if it is an external one) and output its value to a file together with the price a certain number of bars ahead (however far you want to predict into the future) - there are options: Close/High/Low "at the first bar" of that interval, or the Highest/Lowest over it. How to analyse it and how to apply it is something to think about while reading the parallel articles http://www.tora-centre.ru/library/ns/spekulant04.htm and http://www.tora-centre.ru/library/ns/spekulant03.htm
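For illustration only, here is a minimal MQL4 script in the spirit of that recipe; the indicator name, buffer index, look-ahead depth and file name are my placeholder assumptions, not something prescribed in the thread:

// ExportTrainingSet.mq4 - sketch: dump "indicator now, price later" pairs
#property show_inputs

extern string IndicatorName = "MyIndicator"; // hypothetical custom indicator
extern int    LookAhead     = 24;            // how many bars ahead to "predict"
extern int    BarsToExport  = 1000;

int start()
{
   int handle = FileOpen("training_set.csv", FILE_CSV|FILE_WRITE, ';');
   if(handle < 0) return(-1);
   // start LookAhead bars back from the newest bar so the "future" value exists
   for(int i = LookAhead; i < BarsToExport + LookAhead && i < Bars; i++)
   {
      double x = iCustom(NULL, 0, IndicatorName, 0, i); // input: indicator on bar i
      double y = Close[i - LookAhead];                  // target: Close LookAhead bars later
      FileWrite(handle, TimeToStr(Time[i]), x, y);
   }
   FileClose(handle);
   return(0);
}

Each row then pairs an indicator reading with the price LookAhead bars later; the Highest/Lowest variant SergNF mentions would replace the Close[] line with an iHighest/iLowest lookup over that interval.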
Thanks a lot for the links, SergNF. I dabbled with NS myself more than a year ago (in TradingSolutions, and rather head-on): using a Jordan-Elman network I tried to predict the high-low a day ahead, with various MAs fed to the inputs. Primitive, of course, but I drew a number of useful conclusions for myself, and along the way constructed dozens of very different and curious MAs...

I didn't think about neural network classification or Kohonen maps back then - and drew the premature conclusion that NS are of little use, after which I started experimenting with GA. I think my path is typical of most traders who look for the Grail in NS without studying them seriously. It seems, in Elliott terms, that I have successfully passed the phases of the 1st wave (a trial one-sided attack without serious preparation) and the 2nd wave (a deep cool-down) in dealing with NS. Time for the third wave, hehe...
 
sashken:
Pyh wrote:

P.S. I agree with Yurixx's opinion. Rudeness should not be tolerated, although the Expert Advisor, it must be admitted, is a very curious one.
You have not convinced me. I understand perfectly well that testing goes by bar opening prices, BUT! A bar opens, and (for this EA) we have to obtain the AC value at four points, including the AC value of the bar that has just opened. Where do we get AC from if it only forms when the bar closes?

You yourself write that the bar has opened, so the bar's opening price exists. It (the bar's Open) will not change while the bar is forming (High, Low and Close may change, but Open will not, because the bar is already open).

I hope that's clear :)
It is not clear to them. Many do not realize that when Volume[0] == 1 (i.e. on the first tick of a new bar), Close[0] == Open[0] == High[0] == Low[0]; the close price of the last bar is already formed at that moment and will then change tick by tick until the bar closes. And it is precisely from this elementary illiteracy that the claims about the supposedly "fitted" quality of testing come.

We just need to write on the foreheads of all these fate-aggrieved lamers in indelible paint (or better yet, brand it with a hot iron): "Close[0] is the Bid of the last tick that arrived at the terminal, not a telepathic ability of the strategy tester".
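Anyone who doubts it can check this in one line - a throwaway sketch of an EA's start() that logs the first tick of every bar:

// On the very first tick of a new bar all four prices coincide:
// Close[0] == Open[0] == High[0] == Low[0], and Close[0] equals Bid.
int start()
{
   if(Volume[0] == 1) // first tick of the freshly opened bar
      Print("O=", Open[0], " H=", High[0], " L=", Low[0],
            " C=", Close[0], " Bid=", Bid);
   return(0);
}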
 
eugenk1:
Guys, I found what Reshetov did very interesting. Of course, there is no artificial intelligence here to speak of. AI necessarily means adaptation and training, be it of a neural network or of a linear filter. Rather, we should talk about the group behaviour of indicators. Each of them is assigned a weight reflecting its importance and usefulness, and there is a weighted "vote" - a summation. The only thing is that for 4 indicators I would take 14 parameters instead of 4, to account for all possible combinations of them. I think a real adaptive system can be built this way. We take normalized indicators (which I wrote about above) and estimate the quality of each of them by virtual trades. A lying indicator is punished with a decreased weight (down to negative, which means "interpret my signal in exactly the opposite direction"), while a well-functioning one is rewarded with an increased weight. By the way, such a system would really deserve to be called intelligent... If you take 10 indicators instead of 4, the number of all possible combinations becomes 1023. What human mind can analyse such a mountain! But the system can...
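A sketch of how such a voting committee could look in MQL4; the number of members, the normalization of signals to [-1..+1] and the reward/penalty step are all my assumptions, not eugenk1's code:

#define MEMBERS 4
double w[MEMBERS] = {1.0, 1.0, 1.0, 1.0}; // voting weights, adapted over time

// weighted "voting" - summation; signals[] assumed normalized to [-1..+1]
double CommitteeVote(double &signals[])
{
   double sum = 0;
   for(int i = 0; i < MEMBERS; i++)
      sum += w[i] * signals[i];
   return(sum); // > 0 - buy, < 0 - sell
}

// after each virtual trade: carrot for truthful members, stick for liars
void UpdateWeights(double &signals[], double tradeResult, double step)
{
   for(int i = 0; i < MEMBERS; i++)
   {
      if(signals[i] * tradeResult > 0) w[i] += step; // voted right
      else                             w[i] -= step; // lied
   }
}

A weight driven negative ends up meaning "interpret my signal the other way round", exactly as described above.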
This approach is called adaptive, although the classic learning algorithm is a bit different, namely:
  • if the neuron lies, it gets the "stick": w[i] -= a[i] for all inputs i;
  • if the neuron gives a correct answer, it gets the "carrot": w[i] += a[i] for all inputs i.
Then it is checked again, and if it lies again it is whipped again on the bare backside until it stops lying.
There is even a theorem, I don't remember whose name it bears, proving that this algorithm converges, i.e. sooner or later it will find an acceptable equation of the separating plane - but only if the objects being identified are linearly separable in their feature space.
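For comparison, here is the textbook form of that stick-and-carrot procedure (the classic perceptron rule, where the correction is applied on errors only and is signed by the desired answer); the data layout and sizes are illustrative assumptions:

#define INPUTS 3

// Classic perceptron training: adjust only when the neuron "lies",
// moving the weights toward the desired answer d[s] (+1 or -1).
// Returns true once a separating plane is found; on linearly
// NON-separable data it simply runs out of epochs.
bool TrainPerceptron(double &x[][INPUTS], int &d[], double &w[],
                     int samples, int maxEpochs)
{
   for(int epoch = 0; epoch < maxEpochs; epoch++)
   {
      int errors = 0;
      for(int s = 0; s < samples; s++)
      {
         double sum = 0;
         for(int i = 0; i < INPUTS; i++) sum += w[i] * x[s][i];
         int answer = (sum >= 0.0) ? 1 : -1;
         if(answer != d[s]) // lied: the "stick", signed by the right answer
         {
            for(int k = 0; k < INPUTS; k++) w[k] += d[s] * x[s][k];
            errors++;
         }
      }
      if(errors == 0) return(true); // convergence: separating plane found
   }
   return(false);
}

Fed the four XOR points, errors never reaches zero however many epochs you allow - which is exactly the linear-separability caveat of the convergence theorem, and of the buy/sell remark below.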

But identifying buy vs sell is not a linearly separable problem, so the neural network will still make mistakes, no matter how long you drill it with classical training algorithms.
 
And yet - if the program itself decides when to open/close, then by definition it has artificial intelligence.
And the process of optimisation of the Expert Advisor is the training of this system.
 
Itso:
And yet - if the program itself decides when to open/close, then by definition it has artificial intelligence.
And the process of optimisation of the Expert Advisor is the training of this system.
Even if the program itself decides when and where to open and when to close, that does not mean it has intelligence. A moron can also be trained to press buttons. But the moron's decisions about pressing them will not be intelligent from a trading point of view; they will be subjective (for example, the colour of the buttons, rather than the equity value, may be subjectively more attractive for making the decision).
 
Mathemat wrote:
I dabbled with NS myself more than a year ago (in TradingSolutions, and rather head-on): using a Jordan-Elman network I tried to predict the high-low a day ahead, with various MAs fed to the inputs.

Exactly - you dabbled. If you had dealt with it seriously, you would know a mathematically well-founded fact: "Neural networks are suitable for identification, but absolutely unsuitable for extrapolation." Consequently they cannot predict values any distance ahead - the results will be off by plus or minus a kilometre. But in many cases it is possible to identify, with a certain degree of reliability, which class an object belongs to.

For example, we can try to determine the most profitable position (buy or sell) from the values of indicators and oscillators. And that may well work, because it is an identification task. But if you try to use a neural net to calculate where the take profit of those same positions should be, you may succeed in tests, but hardly outside the sample, because a take-profit value is extrapolation - the price must at least touch it (for determining targets, fuzzy logic is probably better suited).

Simply put, you were trying to drive nails into concrete walls with a TV.

More detailed conclusions, and the mathematical derivations made from the results of the Perceptron project, can be read in the book:

Minsky, M. and Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press, Massachusetts.

a translation is available:

Minsky M., Papert S. Perceptrons. Trans. from English. Moscow: Mir, 1971. 261 pp.

My advice, kids: before fooling around, and before drawing publicly significant conclusions from the results of that fooling around, first study the available materials on the subject. Firstly, it will do no harm, and secondly, it will keep you from stepping on rakes that everyone has long known about.
 
Reshetov wrote:
Minsky, M. and Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. MIT Press, Massachusetts.

a translation is available:

Minsky M., Papert S. Perceptrons. Trans. from English. Moscow: Mir, 1971. 261 pp.

My advice, kids: before fooling around, and before drawing publicly significant conclusions from the results of that fooling around, first study the available materials on the subject. Firstly, it will do no harm, and secondly, it will keep you from stepping on rakes that everyone has long known about.
Thank you for pointing out the source. As for the source material, I am actually acquainted with it - through publications on neuro-forecasting. Such publications keep appearing and even claim that neuro is adequate - despite your categorical verdict that neuro is useless for interpolation tasks (Reshetov, exactly inter-, not extrapolation; you should know that for sure if you reason so wisely about linear separability... By the way, if I am not mistaken, Minsky's theorem on the unsolvability of linearly non-separable problems (XOR, say) by a perceptron really did cool interest in neuro - but only until multilayer networks were thought of).
 
Mathemat:
Thank you for pointing out the source. As for the source material, I am actually acquainted with it - through publications on neuro-forecasting. Such publications keep appearing and even claim that neuro is adequate - despite your categorical verdict that neuro is useless for interpolation tasks (Reshetov, exactly inter-, not extrapolation; you should know that for sure if you reason so wisely about linear separability... By the way, if I am not mistaken, Minsky's theorem on the unsolvability of linearly non-separable problems (XOR, say) by a perceptron really did cool interest in neuro - but only until multilayer networks were thought of).
Articles are articles, but the geometric meaning does not go away. And it is this: a linear filter can separate the flies from the cutlets with a linear plane, provided the coordinates (feature values) of those very objects are known and the objects are linearly separable. But the inverse problem has no solution, i.e. you cannot name an object to the neural net and get its coordinates back. All we can find out about an object is which side of the separating plane it lies on. So there can be no talk of interpolation, let alone extrapolation.
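In code, that geometric meaning is literally one dot product - the only thing a trained linear filter can report is the side of the plane, never an object's coordinates. A minimal sketch, names illustrative:

// All we can learn about a feature vector x[] from a trained linear
// filter: which side of the separating plane w*x = 0 it lies on.
int SideOfPlane(double &w[], double &x[], int n)
{
   double sum = 0;
   for(int i = 0; i < n; i++) sum += w[i] * x[i];
   return(sum >= 0 ? 1 : -1); // +1 - one class, -1 - the other; nothing more
}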