Machine learning in trading: theory, models, practice and algo-trading - page 592
That's right, I just went through all sorts of articles to see what's interesting in this topic :) The main advantages over MLP, as I understand it, are speed, minimal settings (here there are none at all), and the fact that these nets hardly overfit.
Also, a Gaussian function is used rather than Student's t. A density function peaked at each input pattern is created, and the results are then linearly summed at the output.
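As a rough illustration of that description (one Gaussian kernel peaked at each training pattern, with class scores formed by summing the kernel outputs), a minimal PNN-style classifier might look like this; the function name and the `sigma` smoothing parameter are my own choices, not from the thread:

```python
import numpy as np

def pnn_predict(X_train, y_train, x, sigma=0.5):
    """Minimal PNN sketch: place one Gaussian kernel at every training
    pattern; a class's score is the mean kernel activation over that
    class's patterns; predict the class with the highest score."""
    scores = {}
    for c in np.unique(y_train):
        d = X_train[y_train == c] - x                      # offsets to class-c patterns
        k = np.exp(-np.sum(d ** 2, axis=1) / (2 * sigma ** 2))  # Gaussian kernels
        scores[c] = k.mean()                               # linear summation at output
    return max(scores, key=scores.get)

# toy usage: two well-separated clusters
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
print(pnn_predict(X, y, np.array([0.2, 0.5])))  # → 0
print(pnn_predict(X, y, np.array([5.0, 5.5])))  # → 1
```

Note there is no training step at all, which matches the "minimum settings" point above: the only knob is the kernel width `sigma`.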
By the way, PNN and GRNN are available in MQL form, but I haven't tried them yet and haven't compared them with MLP
https://www.mql5.com/ru/code/1323
Finally give up on these MQL handicrafts. There is professional software, tested by thousands of users - use it. Imho.
I agree.
If I were Maxim, I would publish all his interesting findings as articles, and for the actual grail work I would use Visim or something like that.
I support you.
I'm making my own, purely for the trading system :) it will even have memory elements (delays), a bit like recurrence :) everything there is simple - I mean you can build any network architecture; the harder part is writing a solver like backprop, but you can do it in the optimizer if there aren't many weights
this is just an example, you can look at the code to see how backprop and the NS itself are implemented
Well, imho, there's no need to be a radio amateur anymore - times have changed. Neither you nor I can do this professionally now.
A friend and I repair satellite communication systems, and we're almost the only ones in Russia doing it. You could never build such a thing yourself... The time of radio amateurs is gone irrevocably.
everyone is making robots now :) you have to make robots that make robots that make things
I understand, I just have a few ideas, it's a creative kind of thing... there's no specific task or "right way" to do it
I have nothing against the creativity. But use professional software for it, not handicrafts. But I don't insist. It's up to the author).
I posted a link above to PNN in Python. Apparently it didn't take off.)
Focused Feedforward Networks with Time Delay
In structural pattern recognition it is common to use static neural networks. Temporal pattern recognition, in contrast, requires processing signals that change over time and generating a response at a particular moment that depends not only on the current value but also on several previous ones.
Do such architectures exist? :) Exactly this type of architecture should work in Forex, in theory... but you have to experiment. It's easy to do - just add a couple of "interesting" neurons to an MLP, or combine two models.
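The "focused" time-delay idea above amounts to feeding a static net a sliding window of lagged values, so the output at time t depends on x_t and several previous samples. A minimal sketch of building such lagged inputs (function name and layout are my own, assuming a 1-D series):

```python
import numpy as np

def lag_matrix(series, n_lags):
    """Turn a 1-D series into rows [x_t, x_{t-1}, ..., x_{t-n_lags}],
    i.e. a sliding time window that a static MLP (or PNN) can consume
    as ordinary fixed-size input - the focused time-delay setup."""
    s = np.asarray(series, dtype=float)
    rows = [s[i - n_lags : i + 1][::-1]      # window ending at t, newest first
            for i in range(n_lags, len(s))]
    return np.array(rows)

# usage: series of 5 points, window of 2 lags
print(lag_matrix([1, 2, 3, 4, 5], 2))
# → [[3. 2. 1.]
#    [4. 3. 2.]
#    [5. 4. 3.]]
```

Any static network trained on these rows effectively has the delay-line memory the quoted passage describes, without changing the network itself.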
Just take a PNN instead of an MLP, and bolt the rest on top and around it