Machine learning in trading: theory, models, practice and algo-trading - page 1239

 
Maxim Dmitrievsky:

I have a smaller class; the error is 0.14/0.4

14% to 40% on the test?

 
Grail:

14% to 40% on the test?

well, train 14, test 40

 
Vizard_:

Accuracy and classification error are different things. Give the accuracy, and the sample split (train/test %).

In principle it's clear - it overfits on the train... 0.6 accuracy on the test (20% of the sample) on this dataset would do...

Classification error for now... it would take a long time to redo it there ))

In alglib I have 20% for training and 80% for OOB, and I did the same here

I did it in Python and got the following:

score(X, y, sample_weight=None)

Returns the mean accuracy on the given test data and labels.


I don't really understand it yet - I only got into Python today, I'll finish tomorrow. If train and test are 50% each, then it comes out like this
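For what it's worth, a minimal sketch of what that score() measures, assuming scikit-learn; the data here is a toy stand-in, and the 20/80 split just mirrors the alglib setup mentioned above (20% train, 80% held out):

# Minimal sketch, assuming scikit-learn; make_classification is a toy stand-in.
# score() returns mean accuracy, so classification error is simply 1 - accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, random_state=0)

# 20% train / 80% holdout, mimicking the 20% train / 80% OOB split in alglib
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.2, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc_train = clf.score(X_train, y_train)  # mean accuracy on the train part
acc_test = clf.score(X_test, y_test)     # mean accuracy on the holdout
print("train: accuracy %.3f, error %.3f" % (acc_train, 1 - acc_train))
print("test : accuracy %.3f, error %.3f" % (acc_test, 1 - acc_test))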


 
Vizard_:

My dataset (accuracy) on this one:
train (80% of sample) = 0.619
test (20% of sample) = 0.612 ROS

A bit cherry-picked; head-on it comes out a little less. That's how you do it, not with a 20% test)))

50% is not enough, 300 observations is nothing.

What was the training?
 
elibrarius:
Training on 20% is something new))

the kid's way ) the error there didn't seem to change much, that's why I did it that way - strong regularization, in short

 
Maxim Dmitrievsky:

the kid's way ) the error there didn't seem to change much, that's why I did it that way - strong regularization, in short

How could it not change? 14% and 40% is a big difference.
60 and 60, like Vizard's - that's how it should be!

 
Vizard_:

The same thing I did with Doc and Toxic; Python doesn't have it... I won't go into it...

Forests or nets, at least?
 
elibrarius:

How could it not change? 14% and 40% is a big difference.
60 and 60, like Vizard's - that's how it should be!

Well, I'll check out all the models available in Python... I mean, the ones in common use... not until tomorrow, though - no way before that.

Maybe you need some more preprocessing.
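If preprocessing is the missing piece, one common scikit-learn pattern is a scaling step inside a Pipeline, so the transform fitted on the train data is reused on the test data. A sketch only; the estimator choice here is illustrative:

# Sketch: standardize features before the classifier inside one Pipeline.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# model.fit(X_train, y_train); print(model.score(X_test, y_test))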
 

Don't you understand that you can't make any money on forex?)

It's easier to become a programmer and get a high-paying job than to engage in this masochism.

 

In short, alglib has classification error and logloss... The logloss makes no sense at all; the forest's classification error drops to zero with a training sample > 0.8 and OOB of 0.2.

RMSE-type metrics are no good for classification.

That's why I took a small training sample, so there would be at least some error, but it's still small. I don't know how to compare it with Python's.
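One way to put the two on the same footing, assuming scikit-learn: its forest exposes an out-of-bag accuracy (oob_score_), and sklearn.metrics has both the zero-one (classification) error and logloss, so the same numbers alglib reports can be read off in Python. Toy data again:

# Sketch: classification error, logloss and OOB accuracy side by side,
# so the Python forest can be compared with alglib's reported figures.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import log_loss, zero_one_loss

X, y = make_classification(n_samples=300, random_state=0)
forest = RandomForestClassifier(oob_score=True, random_state=0).fit(X, y)

print("OOB accuracy :", forest.oob_score_)                    # alglib-style OOB estimate
print("train error  :", zero_one_loss(y, forest.predict(X)))  # usually near zero on train
print("train logloss:", log_loss(y, forest.predict_proba(X)))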
