Machine learning in trading: theory, models, practice and algo-trading - page 563

 
Maxim Dmitrievsky:

Whew... I read the whole thread from the beginning up to my appearance... now I've seen everything

but I didn't find the grail there... too bad, I'll keep digging my own then

This is the right decision. The NS theory here on the forum is far from ideal.
 
Alexander_K2:
This is the right decision. The NS theory here on the forum is far from ideal.

The only thing I wrote down was about the ternary classifier, and solving the mystery of Yuriy Asaulenko

 
Alexander_K2:
This is the right decision. The NS theory here on the forum is far from ideal.
And it is not needed here at all. The whole theory has already been laid out long before us. )
 
Maxim Dmitrievsky:

The only thing I wrote down was about the ternary classifier, and solving the mystery of Yuriy Asaulenko

For once I'll bow deeply to him - Yuriy is not as simple as he seems. I understand what he does: it's as if he has two parallel processes running. One, probabilistic, says that it seems to be time for a trade. The second, the NS, gives the go-ahead or refuses. I won't say anything more - let him tell it himself.
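
A hedged sketch of how such a two-stage scheme could look: a probabilistic trigger proposes an entry moment, then an NS confirms or refuses. Everything here (zscore_trigger, the MLP filter, thresholds, placeholder data) is hypothetical and not taken from the thread:

```python
# Hedged sketch: probabilistic trigger + neural-network filter.
# All names, features and thresholds are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def zscore_trigger(prices, window=50, threshold=2.0):
    """Stage 1: fire when the last price deviates strongly from its rolling mean."""
    recent = prices[-window:]
    z = (prices[-1] - recent.mean()) / (recent.std() + 1e-9)
    return abs(z) > threshold, np.sign(z)

# Stage 2: an MLP filter; fitted on random placeholder data only so the
# sketch runs end-to-end - a real model would be trained on labelled entries.
nn_filter = MLPClassifier(hidden_layer_sizes=(20,), max_iter=300)
nn_filter.fit(rng.standard_normal((200, 5)), rng.integers(0, 2, 200))

def decide(prices, features):
    fired, direction = zscore_trigger(prices)
    if not fired:
        return 0                                    # no candidate entry
    p = nn_filter.predict_proba(features.reshape(1, -1))[0, 1]
    return int(direction) if p > 0.6 else 0         # NS confirms or refuses

prices = 100 + np.cumsum(rng.standard_normal(300))  # toy price series
print(decide(prices, rng.standard_normal(5)))
```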
 
Maxim Dmitrievsky:

The only thing I wrote down was about the ternary classifier, and solving the mystery of Yuriy Asaulenko

And where did you find a mystery?

MLP is ~60 neurons. The algorithm is standard BP. Training is "go I know not where", i.e. I don't know what the NS is learning there. Besides, all the principles of training are laid out in the classical monographs - Haykin, Bishop. The software is not MQL.

The basic principles are, in my opinion, described in this thread.
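
For reference, a minimal sketch of an MLP of roughly the size mentioned (~60 hidden neurons) trained with ordinary backpropagation via SGD; the features and labels below are placeholders, since the thread does not disclose the actual ones:

```python
# Minimal sketch: single-hidden-layer MLP (~60 neurons) trained by standard
# backprop (SGD). Features/labels are random placeholders, not real data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 10))            # placeholder features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder labels

mlp = MLPClassifier(hidden_layer_sizes=(60,),  # one hidden layer of ~60 neurons
                    solver='sgd',              # plain gradient descent, i.e. standard BP
                    learning_rate_init=0.01,
                    max_iter=2000)
mlp.fit(X, y)
print('in-sample accuracy:', mlp.score(X, y))
```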

 
Yuriy Asaulenko:

And where did you find a mystery?

MLP is ~60 neurons. The algorithm is standard BP. Training is "go I know not where", i.e. I don't know what the NS is learning there. Besides, all the principles of training are laid out in the classical monographs - Haykin, Bishop. The software is not MQL.

The basic principles are, in my opinion, described in this thread.


This was kind of a joke :))

 
Maxim Dmitrievsky:

This was kind of a joke :))

No. There's really nothing else there. You think that Haykin and Bishop are hopelessly outdated and you're looking for something new (you wrote that earlier). They are quite enough for me.
 
Yuriy Asaulenko:
No. There's really nothing else there. You think that Haykin and Bishop are hopelessly outdated and you're looking for something new. They are quite enough for me.

No, I mean I was sort of joking... you're the only one in the thread who actually came up with something in the end :)

You need to google perceptron training by the Monte Carlo method.

In general, this method is very similar to RL (reinforcement learning), where there is a learning agent and the NS learns to find the best solution
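
A hedged illustration of the idea: train a perceptron not by backprop but by Monte Carlo sampling of weight perturbations, keeping only those that improve a reward, which is close in spirit to RL's reward-driven search. The data and reward below are toy placeholders, not the actual trading setup:

```python
# Sketch of "perceptron training by Monte Carlo": random weight perturbations
# are accepted only if they raise a reward. Toy data, not a trading system.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 8))        # toy inputs
y = np.sign(X[:, 0] - X[:, 3])           # toy target decisions

def reward(w):
    """Reward of a linear perceptron with weights w: fraction of correct decisions."""
    return (np.sign(X @ w) == y).mean()

w = rng.standard_normal(8) * 0.1
best = reward(w)
for _ in range(5000):                    # Monte Carlo search over weight space
    candidate = w + rng.normal(scale=0.1, size=8)
    r = reward(candidate)
    if r > best:                         # keep only perturbations that improve the reward
        w, best = candidate, r
print('final reward:', best)
```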

 

This is how AlphaGo is trained (although it was previously assumed that Go is a creative game and a machine could not beat a human at it)

And here's the winner.

https://techfusion.ru/nejroset-alphago-pereveli-na-samoobuchenie/

AlphaGo neural network switched to self-learning
  • 2017.10.20
  • techfusion.ru
The AlphaGo artificial intelligence has shown excellent self-learning results: in three days the neural network went from the level of a beginner Go player to that of a professional who only wins. The DeepMind developers have improved the AlphaGo artificial intelligence. The new AI model, AlphaGo Zero, was trained "from scratch" without human involvement, playing against itself...
 
Maxim Dmitrievsky:

No, I mean I was sort of joking... you're the only one in the thread who actually came up with something in the end :)

You need to google perceptron training by the Monte Carlo method.

In general, this method is very similar to RL (reinforcement learning), where there is a learning agent and the NS learns to find the best solution

By the way, it's largely thanks to you. When I was just starting out, it was you who gave me the link to Reshetov's article. The article itself is nothing special, more of an application example, but it became roughly clear where to start.

I don't know whether such methods can be found on Google, since I eventually arrived at Monte Carlo on my own.

I don't know about RL either, but from your brief description it sounds similar to my methods.

I did find Monte Carlo on Google - https://logic.pdmi.ras.ru/~sergey/teaching/mlbayes/08-neural.pdf - but that is something completely different.