Machine learning in trading: theory, models, practice and algo-trading - page 1374

 
elibrarius:

I.e. the alignment is done by the method Vladimir suggested?

I guess it's not quite what I thought it was. I don't know, I haven't used it; I need to read the help.

Is there anywhere to read about these parameters? Why guess?
 
elibrarius:

These weights can be fed not only to boosting, but also to a forest and to a NN. Apparently the methodology is common to all ML systems.
The first experiment with decreasing the influence of old data showed no improvement.

When training on 30,000 rows the test looks better than when training on 80,000 rows: at 80,000 there are both fewer trades and a higher error. I tried decreasing the weight proportionally (from 1 for fresh data to 0.5 for old); the results are almost the same.


Apparently this is still about equalizing the variance, as Maxim pointed out, by the method Vladimir described.

Yes, I'm talking about whether it makes sense to decrease the weights of those rows where the predictor split should be pushed to a deeper level of the tree. Simply giving more weight to newer data might make sense if we think the market has changed...
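
A minimal sketch of the row weighting discussed above, assuming scikit-learn (all data and names here are made up, not taken from the posts): the weight decays linearly from 0.5 for the oldest row to 1.0 for the freshest, and the same sample_weight vector is accepted by a forest just as it is by boosting.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical data: rows ordered from oldest (index 0) to freshest.
rng = np.random.default_rng(0)
X = rng.normal(size=(30000, 20))
y = (rng.random(30000) > 0.5).astype(int)

# Linear decay: 0.5 for the oldest row, 1.0 for the freshest,
# mirroring the experiment described above.
w = np.linspace(0.5, 1.0, len(y))

model = RandomForestClassifier(n_estimators=100)
model.fit(X, y, sample_weight=w)  # per-row weights, same idea as in boosting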

 
Maxim Dmitrievsky:
Is there anywhere to read about these parameters?

Didn't pay attention to this thing before

 

I made a prototype of a trading system (TS) based on a neural network (NN). A trade is opened according to the NN forecast; the forecast horizon is 5 minutes. The trade is closed 5 minutes after opening. The open trade is not monitored.

Here is the first result:

x is the trade number, y is the profit in pips. Commissions etc. are not taken into account. The test interval is 3.5 months.

Trades up to about the 60th should be ignored: they fall before the previous futures expiry, where a reliable forecast is hardly possible. The sharp jumps are, I suspect, intraday gaps.

And here is the Python code. It couldn't be simpler.

def Long(i):  # Long trade: profit = Close[i+5] - Close[i]
    print('long')
    profLS.append(SD.history[i+5][c.c] - SD.history[i][c.c])
    return i + 5

def Short(i):  # Short trade: profit = Close[i] - Close[i+5]
    print('short')
    profLS.append(SD.history[i][c.c] - SD.history[i+5][c.c])
    return i + 5

while i < LenHist - 5:  # keep the i+5 exit bar inside the history
    x = []
    for j in range(0, 20):  # prepare the inputs for the NN
        x.append((SD.history[i-j][c.c] / SD.history[i][c.c] - 1) * 1000)
    out = MLP.Predict([x])  # ask the NN for a forecast
    if out >= 3.0:
        i = Long(i)
        tmp.append('L')
    elif out <= -3.0:
        i = Short(i)
        tmp.append('S')
    i += 1
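
For reference, the snippet relies on objects defined elsewhere in the prototype (SD, MLP, c, profLS, tmp, i, LenHist). A stand-in sketch on synthetic data, with every definition below assumed rather than taken from the original, makes the loop above runnable end to end:

import numpy as np

# Hypothetical scaffolding: a synthetic random walk as the Close history.
_closes = 100 + np.cumsum(np.random.default_rng(1).normal(0, 0.1, 5000))

class SD:  # stand-in for the history container
    history = [{'c': p} for p in _closes]

class c:  # column keys of a history record
    c = 'c'  # Close price

class MLP:  # stand-in for the trained network
    @staticmethod
    def Predict(batch):
        return np.random.default_rng().normal(0.0, 2.0)  # dummy forecast

profLS = []  # profit of each trade, in price units
tmp = []     # trade direction log: 'L' / 'S'
LenHist = len(SD.history)
i = 20       # the NN inputs need 20 past bars

With these stand-ins the loop runs as written; in the real prototype SD.history would hold actual 5-minute bars and MLP the trained network.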
 
elibrarius:

Didn't pay attention to this thing before

I couldn't find this parameter on the xgboost website; there is a "tuning parameters" section, but it's all about the bias-variance trade-off.

It's kind of similar to what I was thinking.

I don't use it, just curious.

 
Yuriy Asaulenko:

I made a prototype of a TS based on a NN. The trade is closed 5 min after opening (the forecast horizon). The open trade is not monitored.

Here is the first result:

x is the trade number, y is the profit in pips. Commissions etc. are not taken into account. The test interval is 3.5 months.

Trades up to about the 60th should be ignored: they fall before the previous futures expiry, where a reliable forecast is hardly possible. The sharp jumps are, I suspect, intraday gaps.

Well, and Python code. It couldn't be simpler.

It looks interesting, but do you take into account closing for the night and not opening at 10:00?

 
Aleksey Vyazmikin:

It looks interesting, but do you take into account closing for the night and not opening at 10:00?

Nothing is taken into account. A continuous stream of Close history goes directly to the NN. We open according to the NN forecast and close 5 minutes later.

 
Maxim Dmitrievsky:

I couldn't find this parameter on the xgboost website; there is a "tuning parameters" section, but it's all about the bias-variance trade-off.

It's kind of similar to what I was thinking.

I don't use it, I just wondered.

In the PDF description of the package, for the xgb.train function it says:

weight - a vector indicating the weight for each row of the input.

And that's it.

The ELM package has the same parameter. I've seen it somewhere else too.
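
As a reference point, here is how that weight vector is passed in the Python xgboost API (the data below is made up); in the R package the same vector is attached to the training DMatrix:

import numpy as np
import xgboost as xgb

# Hypothetical data: 1000 rows, 20 features, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (rng.random(1000) > 0.5).astype(int)
w = np.linspace(0.5, 1.0, 1000)  # one weight per row, oldest to freshest

dtrain = xgb.DMatrix(X, label=y, weight=w)  # the "weight" from the docs
params = {'objective': 'binary:logistic', 'max_depth': 4}
model = xgb.train(params, dtrain, num_boost_round=50)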

 
elibrarius:

In the PDF description of the package, for the xgb.train function it says:

weight - a vector indicating the weight for each row of the input.

And that's all.

I asked a guy who works with boosting; he'll tell me later.

 
Yuriy Asaulenko:

Nothing is taken into account. A continuous stream of Close history goes directly to the NN. We open according to the forecast and simply close 5 minutes later.

I got good results with this during training, but it turned out to be impossible to close at 10:00 on a stop without the price gapping through it, which skewed the results badly.