Machine learning in trading: theory, models, practice and algo-trading - page 512

 
elibrarius:

A question for those who have solved regression problems with neural networks:

What do you use as the training target (and what do you then predict)?

There are options:

1) one output: the price increment of the next bar (this seems uninteresting to me, because a single bar's price increment is usually insignificant);

2) 10-20 outputs: the price increments for each of the next 10-20 bars (seems resource- and time-consuming, but potentially more accurate);

3) one output: the price increment to the next zigzag extremum (for example, if a zigzag extremum occurs 15 bars ahead, predict its price).

Which option is better in your opinion? Maybe there is something better?

If the forecast is made several bars ahead (options 2 and 3), the probability of it coming true obviously decreases. What horizon would be optimal?
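A minimal sketch of how the three target variants could be computed from a series of close prices, assuming pandas and a 15-bar horizon; the forward rolling extreme used for option 3 is only a crude stand-in for a true zigzag extremum:

```python
import numpy as np
import pandas as pd

def make_targets(close: pd.Series, horizon: int = 15):
    """Build the three candidate regression targets from a close-price series."""
    # Option 1: one output, the price increment of the next bar only.
    y_next = close.shift(-1) - close

    # Option 2: multi-output target, the increments for each of the next `horizon` bars.
    y_multi = pd.DataFrame(
        {f"dclose_{k}": close.shift(-k) - close for k in range(1, horizon + 1)}
    )

    # Option 3: increment to a future swing point. The extreme close within the
    # next `horizon` bars is used here as a rough proxy for a zigzag extremum.
    future_max = close.rolling(horizon).max().shift(-horizon)
    future_min = close.rolling(horizon).min().shift(-horizon)
    up_move = future_max - close
    down_move = close - future_min
    y_extreme = pd.Series(
        np.where(up_move >= down_move, up_move, -down_move), index=close.index
    )

    return y_next, y_multi, y_extreme
```

The last `horizon` rows of every target look into the future and come out as NaN, so they have to be dropped before training.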

I tried option 1; the model does find some regularities.

Alexey (the author of this thread) suggested looking not one bar ahead but several, choosing experimentally how far to look. In his case it was classification, but for my regression this approach also slightly improves the result compared to option 1. It is similar to option 2, except that I did not train a model with many outputs; I simply trained a separate model for each new target.

I can't say anything definite about the zigzag, I haven't worked with it, but people in this thread wrote that zigzag targets, on the contrary, work well.
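The "one model per forecast horizon, chosen experimentally" approach might look roughly like the sketch below; the feature matrix `X`, the gradient-boosting regressor, and the 5-split walk-forward evaluation are all assumptions made for illustration.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

def score_per_horizon(X, close, horizons=range(1, 21)):
    """Fit one regressor per forecast horizon and return the average
    walk-forward MAE for each, so the look-ahead distance can be picked
    experimentally instead of training a single multi-output model."""
    tscv = TimeSeriesSplit(n_splits=5)
    scores = {}
    for h in horizons:
        y = (close.shift(-h) - close).dropna()   # price increment h bars ahead
        Xh = X.loc[y.index]                      # drop rows with no future data
        fold_errors = []
        for train_idx, test_idx in tscv.split(Xh):
            model = GradientBoostingRegressor()
            model.fit(Xh.iloc[train_idx], y.iloc[train_idx])
            pred = model.predict(Xh.iloc[test_idx])
            fold_errors.append(mean_absolute_error(y.iloc[test_idx], pred))
        scores[h] = sum(fold_errors) / len(fold_errors)
    return scores   # {horizon: average out-of-sample MAE}
```

The horizon with the lowest out-of-sample error would then be the candidate for "how far to look ahead".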

 
Elibrarius:
I wonder whether there is any point in spending time on machine learning if the forecast accuracy is about 50%. I think it would be about the same with moving averages.
If there is no difference in the forecast, why spend many times more effort on complex methods that give the same results as simple ones?

In general, before spending time on machine learning and neural networks, you should first decide why and for what they are needed.

I have made up my mind: as a supplement to a classical trading system, not as a substitute for classical methods. I have not gotten to finished systems yet, but the model results are very good.

 
Yuriy Asaulenko:

In general, before spending time on machine learning and neural networks, you should first decide why and for what they are needed.

I have made up my mind: as a supplement to a classical trading system, not as a substitute for classical methods. I have not gotten to finished systems yet, but the model results are very good.


That's as old as a mammoth's egg, and a correct statement. A neural network can be a supplement to a trading system, but not a replacement for it; and keeping roughly equal numbers of zeros and ones in the classification data was discussed in this thread more than a year ago. You just don't want to listen....
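On the remark about keeping roughly equal numbers of zeros and ones in the classification data: one simple way to check and enforce that is to downsample the majority class before training. A small sketch, assuming a pandas DataFrame with a hypothetical `label` column:

```python
import pandas as pd

def balance_labels(df: pd.DataFrame, label_col: str = "label", seed: int = 42) -> pd.DataFrame:
    """Downsample the majority class so that zeros and ones occur in
    roughly equal numbers before training a classifier."""
    counts = df[label_col].value_counts()
    print("class counts before balancing:")
    print(counts)
    n_min = counts.min()   # size of the smallest class
    balanced = (
        df.groupby(label_col, group_keys=False)
          .apply(lambda g: g.sample(n=n_min, random_state=seed))
    )
    return balanced.sort_index()
```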

 

Why does a dog need a fifth leg? That is, why does a trading system need a neural network?

 

I follow the topic, and I always see the same thing


 
Vitaly Muzichenko:

I follow the topic, and I always see the same thing



it's a NEURAL NETWORK!

he "trains" the water so that it "remembers" and drains away on its own.

After a while the water will "learn" and will scoop itself out.


This is how AI is created: you have to act against logic, and then you will find a way.


That's how I see it too.... he's clearly a true neural-network guy )

 

It is often written... that a neural network makes things up on its own: draws, creates... and it sort of defies human logic....


Let's say it is.

But something amorphous, however super-intelligent, is unable to create anything at all without its "Creator". "It" (the AI) needs something to imitate. Without that there will be no development.

Suppose it suddenly finds an online brochure of ISIS Islamists on the net; that would be the end of mankind.


So the AI must be trained within a framework of morality.

Otherwise the "Wahhabi" will be 10000000000 times more horrible.

 

I once tried to create an AI in BASIC, back in 1998, on my ZX Spectrum computer.

 
 
Mihail Marchukajtes:

Come on!!!! I had one too, but only in the early nineties; by the end of the decade it had already become a rarity. I also wrote my first programs in BASIC....


well, everyone has their own quirks.
