Machine learning in trading: theory, models, practice and algo-trading - page 592

 
Maxim Dmitrievsky:

Yes, right. I just went through all sorts of articles to see what's interesting on this topic :) The main advantages over MLP, as I understand it, are speed and the minimum of settings (here there are none at all), and the fact that these nets hardly overfit at all

Also, a Gaussian function is used rather than Student's. A density function with a peak is placed on each input sample, and the results are linearly summed at the output

By the way, PNN and GRNN are available in MQL form, but I haven't tried them yet and haven't compared them with MLP

https://www.mql5.com/ru/code/1323
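The GRNN output stage described above (a Gaussian kernel centred on each training sample, with outputs combined as a kernel-weighted sum) reduces to a few lines. This NumPy sketch is purely illustrative and is not the code from the linked MQL5 library:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN / Nadaraya-Watson estimate: a Gaussian kernel sits on every
    training sample; the prediction is the kernel-weighted average of
    the training targets."""
    # squared Euclidean distance from the query to each training point
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))    # Gaussian kernel weights
    return np.sum(w * y_train) / np.sum(w)  # weighted average of targets

# toy usage: learn y = x^2 from 21 points on a grid
X = np.linspace(-2, 2, 21).reshape(-1, 1)
y = (X ** 2).ravel()
print(grnn_predict(X, y, np.array([1.0]), sigma=0.2))
```

Note there is no weight training at all, which matches the "minimum settings" point: the only knob is the kernel width `sigma`.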

Well, finally give up on these MQL handicrafts. There is professional software, proven by thousands of users; use it. Imho.
 
Yuriy Asaulenko:
Well, finally give up on these MQL handicrafts. There is professional software, proven by thousands of users; use it. Imho.

I agree.

If I were Maxim, I would publish all these interesting findings as articles, and for actually developing the grail I would use VisSim or something like that.

 
Alexander_K2:

I support you.

If I were Maxim, I would publish all his interesting findings as articles, and for actually developing the grail I would use VisSim or something like that.

VisSim? Are you kidding? As Ellochka Shchukina would say :) I need software that has everything in it, and that's Python and R. Though I'm no great expert in either, that's my impression judging by the Internet and, in general, my own observations.
 
Yuriy Asaulenko:
Well, finally give up on these MQL handicrafts. There is professional software, proven by thousands of users; use it. Imho.

I'm making my own, purely for my TS :) It will even have memory elements (delays), a bit like recurrence :) Everything there is simple, I mean you can build any net architecture; it's harder to build a solver like backprop, but you can do it in the optimizer if there aren't many weights

This is just an example; you can look at the code and the NN
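The idea of fitting a small net's weights with a generic optimizer instead of backprop can be sketched like this. This is a toy Python illustration, not the MQL5 strategy optimizer; the (1+1) evolution strategy, the network size, and the XOR task are all my own assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_net(w, X):
    """One hidden tanh layer (3 neurons, 2 inputs); w packs all 13 weights."""
    W1, b1 = w[:6].reshape(3, 2), w[6:9]   # hidden weights and biases
    W2, b2 = w[9:12], w[12]                # output weights and bias
    h = np.tanh(X @ W1.T + b1)             # hidden activations
    return h @ W2 + b2                     # linear output

def mse(w, X, y):
    return np.mean((tiny_net(w, X) - y) ** 2)

# toy task: XOR on two binary inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# crude (1+1) evolution strategy standing in for the platform optimizer:
# mutate the weight vector, keep the mutant only if the error improves
best = rng.normal(0.0, 1.0, 13)
best_err = mse(best, X, y)
for _ in range(20000):
    cand = best + rng.normal(0.0, 0.1, 13)
    err = mse(cand, X, y)
    if err < best_err:
        best, best_err = cand, err
```

With only 13 weights this blind search is workable; with hundreds of weights it stalls, which is exactly why backprop (gradients) becomes necessary for larger nets.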

 
Maxim Dmitrievsky:

I'm making my own, purely for my TS :) It will even have memory elements (delays), a bit like recurrence :) Everything there is simple, I mean you can build any net architecture; it's harder to build a solver like backprop, but you can do it in the optimizer if there aren't many weights

This is just an example; you can look at the code to see how backprop is implemented, and the NN itself

Well, imho, there's no need to be a radio amateur nowadays; it's a different time. Neither you nor I will do it professionally.

A friend and I repair satellite communication systems, and we're almost the only ones in the Russian Federation. Yet we could never build (I mean manufacture) such a thing ourselves... The time of radio amateurs is gone irrevocably.

 
Yuriy Asaulenko:

Well, imho, there's no need to be a radio amateur nowadays; it's a different time. Neither you nor I will do it professionally.

A friend and I repair satellite communication systems, and we're almost the only ones in the Russian Federation. Yet we could never build (I mean manufacture) such a thing ourselves... The time of radio amateurs is gone irrevocably.


Robots make everything now :) You have to build robots that build robots that make things

I understand; I just have a few ideas, it's a kind of creative work... there's no well-defined task, no single right way to do it

 
Maxim Dmitrievsky:

Robots make everything now :) You have to build robots that build robots that make things

I understand; I just have a few ideas, it's a kind of creative work

I'm not talking about creativity. Just use professional software in it, not handicrafts. But I don't insist. It's up to the author).
 
Yuriy Asaulenko:
I'm not talking about creativity. Just use professional software in it, not handicrafts. But I don't insist. It's up to the author).
I posted a link to a PNN in Python above. Apparently it didn't land)
 
Aleksey Terentev:
I posted a link to a PNN in Python above. Apparently it didn't land)
I saw it. But you're all still on MQL, and that's my point: I don't think it will work as long as we're dealing with DM there.
 

Focused Feedforward Networks with Time Delay

In structural pattern recognition it is common to use static neural networks. In contrast, temporal pattern recognition requires processing patterns that change over time, and generating a response at a particular point in time that depends not only on the current value but also on several previous ones.

Are there such architectures? :) Exactly this type of architecture should work in Forex, in theory... but you have to experiment. It's easy to do: just add a couple of "interesting" neurons to an MLP, or combine two models.

Just take a PNN instead of the MLP, and bolt the rest on top and on the sides
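The combination suggested here, a tapped delay line that turns the temporal pattern into a static input vector, feeding a PNN/GRNN-style kernel stage instead of an MLP, can be sketched as follows. This is a toy Python illustration; the sine series, the window depth, and the kernel width are arbitrary assumptions, not a tested trading setup:

```python
import numpy as np

def delay_embed(series, depth):
    """Tapped delay line: row i holds [x_i, x_{i-1}, ..., x_{i-depth+1}],
    so a temporal pattern becomes a static input vector."""
    return np.array([series[i - depth + 1:i + 1][::-1]
                     for i in range(depth - 1, len(series))])

def kernel_predict(X_train, y_train, x, sigma=0.3):
    """Gaussian-kernel weighted average (PNN/GRNN-style output stage)."""
    w = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    return np.sum(w * y_train) / np.sum(w)

# toy series: predict the next value of a sine wave from 4 delayed inputs
t = np.arange(200) * 0.1
s = np.sin(t)
depth = 4
X = delay_embed(s[:-1], depth)   # windows ending at time i
y = s[depth:]                    # target: the value one step ahead
pred = kernel_predict(X[:-1], y[:-1], X[-1])
print(pred, y[-1])
```

Because the sine is periodic, nearly identical delay windows exist earlier in the series, so the kernel average reproduces the next value closely; the point of the sketch is only that the "memory" lives entirely in the input embedding, while the network itself stays static.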