Machine learning in trading: theory, models, practice and algo-trading - page 337

 
elibrarius:
What about Chaos Hunter? Give me a specific link.

also interesting
 
Dr. Trader:

No one is going to spend months developing a strategy and then go bragging about it on a demo account. That kind of thing is traded on a real account, with the transaction history hidden from everyone. I even read on the forum how people deliberately trade at two brokers, taking turns losing at one and compensating the losses at the other, so that not even the broker knows which deals came from the strategy and which were fake.

There are results. Sometimes a good combination of predictors and model is found that brings a profit for a couple of months, more often less. Then it is replaced by another.


My personal opinion: neural networks, forests, regressions - all of this is too weak for forex. The reason is that price behavior changes constantly; rules that are profitable today could have been unprofitable a week ago. And the standard approach - take indicators and price for a couple of months and train a neural network on them - assumes it can find the same rules of price behavior across the whole two months. There are no such rules, nobody knows what it will find, and it will be wrong 99% of the time. Occasionally a model gets lucky and lands in that 1%, but that is still far from a grail; such Expert Advisors usually trade well until the first stop loss and can then be discarded.

I am currently studying pattern-recognition models that look at how price behaved after similar patterns in history and trade on those statistics.
In R I haven't seen a package that does everything I need; my model is assembled piecemeal from others, plus my own reinvented wheels. The closest thing to a description of the model I've seen is in another thread; I'd advise starting to build your grail from it (quote below). New problems will emerge along the way; you'll have to think about them and experiment.
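The pattern-matching approach described above can be sketched roughly as a nearest-neighbor search over price windows: normalize the latest window, find the k most similar windows in history, and average what the price did afterwards. This is only an illustrative sketch of the idea; the function name, window sizes, and the Euclidean similarity metric are all assumptions, not the author's actual model.

```python
import numpy as np

def pattern_forecast(prices, window=20, k=50, horizon=5):
    """Average future move after the k historical windows most
    similar to the latest one (illustrative parameters)."""
    p = np.asarray(prices, dtype=float)
    current = p[-window:]
    current = (current - current.mean()) / (current.std() + 1e-9)
    dists, moves = [], []
    for i in range(len(p) - window - horizon):
        w = p[i:i + window]
        w = (w - w.mean()) / (w.std() + 1e-9)     # scale-free pattern shape
        dists.append(np.linalg.norm(w - current)) # similarity to current window
        moves.append(p[i + window + horizon - 1] - p[i + window - 1])
    nearest = np.argsort(dists)[:k]               # k most similar patterns
    return float(np.mean(np.asarray(moves)[nearest]))
```

A real version would add the problems the post hints at: choosing the similarity metric, weighting neighbors by distance, and deciding how much history is still representative.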


2 months is not enough, because it's impossible to tell exactly when "Kolyan" (the margin call) will come visit.

Success to all!

 
elibrarius:

If I am not mistaken, an RNN would be extremely difficult to implement in MT5; for good results you would need either to buy one or to develop your own, at a huge labor cost.

And with an MLP, besides the information about price and indicators on the current bar, we can feed in the same information for the 10-30 previous bars - a kind of memory. Some neurons will process the current state, while others process how the situation developed in the recent past.
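The "memory" idea above amounts to building a lagged feature matrix: each training row carries the current bar plus the same values for N previous bars. A minimal sketch, with an illustrative helper name and closes standing in for the full set of features:

```python
import numpy as np

def make_lagged_features(close, n_lags=10):
    """Each row: the current bar's close followed by the closes
    of the n_lags previous bars (newest first)."""
    c = np.asarray(close, dtype=float)
    rows = []
    for t in range(n_lags, len(c)):
        rows.append(c[t - n_lags:t + 1][::-1])   # current bar first
    return np.array(rows)

X = make_lagged_features(range(100), n_lags=10)
print(X.shape)   # (90, 11): 90 samples, current bar + 10 lags
```

The same construction works for indicator values; in practice each extra lag multiplies the input dimension, which is why the 500-neuron estimate later in the thread is not far-fetched.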


It won't work as intended anyway; the principles of operation are quite different... An MLP will simply classify the predictors into buy/sell groups if it can, and if it cannot, it will again produce mush at the output. In other words, you could use a Random Forest instead of the MLP - the result would be the same, and you wouldn't have to bother with it.
 
Maxim Dmitrievsky:

In OpenCL - if you're not lazy ))

OpenCL seems to be able to compute only on its own machine, not over the network. I'm afraid one PC will not be enough.

I am looking towards frames: rewriting ALGLIB so that the data of each pass is saved to a file; then every thousandth pass (or at the end of a training cycle/epoch) this file is read and the agents are given permission (via a file) to calculate the next epoch. Although I already see a problem here - will remote agents be able to read the permission file? I need to figure that out. It seems they can't :(
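The file-based handshake described above can be sketched in plain Python (not MQL5): the controller writes the granted epoch number to a permission file, and each agent polls it before starting the next epoch. The file name and function names are illustrative assumptions, and - as the post suspects - this only works for local agents, since remote MT5 agents are sandboxed and cannot see the terminal's files.

```python
import os
import tempfile

PERMISSION_FILE = os.path.join(tempfile.gettempdir(), "epoch_permission.txt")

def grant_epoch(epoch):
    """Controller side: allow agents to start the given epoch."""
    with open(PERMISSION_FILE, "w") as f:
        f.write(str(epoch))

def agent_may_run(epoch):
    """Agent side: check whether this epoch has been granted yet."""
    try:
        with open(PERMISSION_FILE) as f:
            return int(f.read()) >= epoch
    except (OSError, ValueError):
        return False   # no permission file yet, or it is mid-write

grant_epoch(3)
print(agent_may_run(3), agent_may_run(4))  # True False
```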

Only the simplest version, like https://www.mql5.com/ru/articles/497, can have its calculations distributed, but it is too simple - single-layer - and it's not clear how to train it on your own commands.

Neural Networks - From Theory to Practice
  • 2012.10.06
  • Dmitriy Parfenovich
  • www.mql5.com
These days probably every trader has heard of neural networks and knows how cool they are. Most people imagine those who understand them as something close to superhumans. In this article I will try to explain how a neural network works, what can be done with it, and show practical examples of its use.
 
elibrarius:

I am looking towards frames: rewriting ALGLIB so that the data of each pass is saved to a file; then, for example, every thousandth pass (or at the end of a training cycle/epoch) this file is read and the agents are given permission (via a file) to calculate the next epoch. Although I already see a problem - will remote agents be able to read the permission file? We need to figure it out.

Calculations can be distributed only for the simplest version, like https://www.mql5.com/ru/articles/497, but it is too simple - single-layer - and it is not clear how to train it on your own commands.


Then create several such neurons there, and add extra weights for the connections between the neurons (just like the weights between the input layer and a neuron) - only there will be a lot of inputs. And you won't need OpenCL; it will compute quickly in the cloud.

I.e. the first neuron will have 5 connections to the 5 neurons of the second layer, and from them another 5 connections to the output one - something like that.

And it is trained in the optimizer by weight selection; then you pick the best run from the optimizer.
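The scheme above - a tiny net whose weights are not trained by backpropagation but selected by the optimizer - can be sketched as follows. Random search stands in for the MT5 genetic optimizer here; the 1-5-1 topology, function names, and weight ranges are illustrative assumptions.

```python
import math
import random

def forward(x, w_in, w_out):
    """One input -> 5 hidden tanh neurons -> one linear output."""
    hidden = [math.tanh(x * w) for w in w_in]
    return sum(h * w for h, w in zip(hidden, w_out))

def optimize(samples, trials=2000, seed=0):
    """Pick the weight set with the smallest squared error over samples,
    the way an optimizer pass selects the best parameter combination."""
    rng = random.Random(seed)
    best, best_err = None, float("inf")
    for _ in range(trials):
        w_in = [rng.uniform(-2, 2) for _ in range(5)]
        w_out = [rng.uniform(-2, 2) for _ in range(5)]
        err = sum((forward(x, w_in, w_out) - y) ** 2 for x, y in samples)
        if err < best_err:
            best, best_err = (w_in, w_out), err
    return best, best_err

samples = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # toy target: y = x^2
weights, err = optimize(samples)
```

The obvious drawback, which the thread runs into immediately, is that the number of weights grows quadratically with layer width, so optimizer-based selection stops scaling well past a handful of neurons.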

 

Exactly as I imagined it )

Only I'm afraid there will be not 5 neurons but at least 500 (if data for several bars are fed in, as an analogue of memory).

And what about training on manually set commands or on zigzag commands? Is there no way to bolt that on?

 
elibrarius:

Exactly as I imagined it )

Only I'm afraid there will be not 5 neurons but at least 500 (if data for several bars are fed in, as an analogue of memory).

And what about training on manually set commands or on zigzag commands? There's no way to bolt that on?


Why not? You just feed the output 0 or 1, depending on whether the zigzag rose or fell; that is, the input is the history shifted back by n bars, and the output is whether it rose or fell.
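The labeling scheme described above can be sketched like this: pair each window of n past bars with a 0/1 target saying whether price subsequently rose or fell. This is a simplified illustration - the real zigzag indicator filters out small swings, whereas here the label is just the sign of the future move, and the function name is an assumption.

```python
import numpy as np

def make_dataset(close, n_bars=10, horizon=1):
    """Pair each window of n_bars closes with a 0/1 'rose or fell' label."""
    c = np.asarray(close, dtype=float)
    X, y = [], []
    for t in range(n_bars, len(c) - horizon):
        X.append(c[t - n_bars:t])            # history shifted back n bars
        y.append(int(c[t + horizon] > c[t])) # 1 = rose, 0 = fell
    return np.array(X), np.array(y)

X, y = make_dataset(np.arange(50.0))   # strictly rising prices
print(X.shape, y.min())   # (39, 10) 1
```

Targets built this way are exactly the "pre-specified answers" discussed next: they define the error the network is trained against, not values substituted in place of the network's own outputs.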
 
Maxim Dmitrievsky:

Why not? You just feed the output 0 or 1, depending on whether the zigzag was rising or falling, i.e. the input is the history shifted back by n bars, and the output is a forecast - rising or falling.

If in this code https://www.mql5.com/ru/articles/497 we substitute the outputs instead of calculating them, we will get the same result for any combination of input data - after all, we will always be using the pre-specified answer. That is, there will be no learning.
 
elibrarius:
If we substitute the outputs instead of the computed ones in this code https://www.mql5.com/ru/articles/497, we will get the same result for any combination of input data - after all, we will always be using the pre-specified answer. I.e., there will be no learning.


so there will be different outputs

Ah, I see - there's no neural network here )

 
Maxim Dmitrievsky:

so there will be different outputs

I don't understand your idea (
