Alexandr Andreev:
The total is subtracted from one, i.e. the closer the total is to 0, the better the results. In other words, not much in the way of results so far, since 0.75 is your 75%, although it depends on what you compare it with. The worst score would be 1 (100%), the best score 0.
You have to understand that a score of 90 is ten times worse than a score of 99, and a score of 99 is ten times worse than a score of 99.9; 100 is in fact possible only when every module scores 100. In error terms, a score of 0.1 is ten times worse than a score of 0.01, and likewise a score of 10 is ten times worse than a score of 1.
I don't understand the logic at all. If the module produced an error of 4.43% on unlearned data, then 100 - 4.43 = 95.57% is the percentage of error-free answers. Why should this percentage be worse than 95.01%? What am I not getting?
It is probably better to take the sum of squares of the module errors and extract the square root.
That way we get an overall estimate of the module errors.
The closer the value is to zero, the better.
So it goes like this.
The estimate shows that Mod5 has the smallest error.
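The root-of-sum-of-squares estimate above can be sketched as follows. The table values here are hypothetical placeholders, not the thread's actual results; Python is used only for illustration. Note how this estimate penalizes a single large outlier, so an "even" row can beat a row with one very good and one very bad module:

```python
import math

# Hypothetical table: each row is one parameter set,
# each column is the error percentage of one module.
rows = {
    "run A": [4.4, 7.1, 5.2, 9.8, 3.1],
    "run B": [6.0, 6.2, 5.9, 6.1, 6.3],   # "even" errors
    "run C": [1.0, 2.0, 1.5, 25.0, 2.2],  # one large outlier
}

def rss(errors):
    """Root of the sum of squared errors: one aggregate score, lower is better."""
    return math.sqrt(sum(e * e for e in errors))

scores = {name: rss(errs) for name, errs in rows.items()}
best = min(scores, key=scores.get)
```

Here "run B" wins even though "run C" has the smallest individual errors, because squaring makes the 25% outlier dominate its score.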
Thanks, but that's not it. I introduced a criterion for myself: any module showing an error percentage higher than 30 is simply discarded.
And the task is not to find out which module has the fewest errors, but with which parameters all the modules give a more "even" result.
In the first post I gave a table with results when changing only one parameter, the last one. If I run the script with other parameters changed, the table will be much larger. You can't look at all the values, and the average error, I think, doesn't say much...
Forum on trading, automated trading systems and trading strategy testing
I am asking for practical advice.
Sergey Tabolin, 2020.06.06 17:18
My question is: How do I evaluate results correctly?
The error of each module is given in percent; 0% is the ideal result.
I want the error of each module to be minimal, but the scatter to be minimal as well.
Sergey Tabolin, 2020.06.07 08:00
I don't understand the logic at all. If the module produced an error of 4.43% on unlearned data, then 100 - 4.43 = 95.57% is the percentage of error-free answers. Why should this percentage be worse than 95.01%? What am I not getting?
Thanks, but that's not it. I defined a criterion for myself: any module with an error percentage greater than 30 is simply excluded from work.
And the task is not to find out which module has the fewest errors, but with which parameters all the modules give a more "even" result.
In the first post I gave a table with results when changing only one parameter, the last one. If I run the script with other parameters changed, the table will be much larger. You can't look at all the values, and the average error, I think, doesn't say much...
The error-minimization estimate is used to select an appropriate model.
As for what parameters to use for the model: how would we know your algorithm and its parameters, let alone the way of finding them?
The way of finding them must be consistent with the model you built.
Find the maximum, calculate the average and, depending on the average, adjust the maximum. Then choose by the minimum adjusted maximum. You have to come up with a formula for correcting the maximum, with a coefficient in it, and the value of the coefficient has to be picked empirically.
Put simply: multiply the maximum by the average and by the coefficient. By changing the coefficient, see which variant comes out best; that is how you pick the coefficient.
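A literal sketch of "multiply the maximum by the average", with hypothetical error values (not the thread's table). A constant coefficient is kept in the formula as described, but note it scales all scores equally, so it only affects the ranking if the formula is made non-linear:

```python
# Hypothetical per-module error percentages for three parameter sets.
rows = {
    "set 1": [4.0, 5.0, 6.0, 29.0],   # good average, bad worst case
    "set 2": [9.0, 10.0, 11.0, 12.0], # "even" errors
    "set 3": [1.0, 2.0, 3.0, 28.0],
}

def score(errors, k=1.0):
    # Correct the worst-case error by the average error; k is the
    # tunable coefficient from the post (a pure scale factor here).
    return max(errors) * (sum(errors) / len(errors)) * k

best = min(rows, key=lambda name: score(rows[name]))
```

The "even" set wins because both its maximum and its mean are moderate, while the other sets are dragged down by their worst module.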
I apologise if I didn't express myself correctly ))))
By "results" I meant three rows of the table, three results. Each result is the percentage of wrong answers over all 15 modules.
Another option. Off-topic, but also a way: don't look at the percentages themselves, but at a rating. In each column assign a whole number from 1 to 3 (or 1, 1, 2 in case of ties, etc.). Then calculate the average rating.
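The rating idea can be sketched like this (hypothetical error table; three parameter sets scored per column, ties sharing a rank):

```python
# Hypothetical error table: rows are parameter sets, columns are modules.
rows = {
    "set 1": [4.0, 12.0, 7.0],
    "set 2": [5.0, 10.0, 8.0],
    "set 3": [6.0, 9.0, 6.0],
}
names = list(rows)
n_cols = len(next(iter(rows.values())))

def column_ranks(col):
    """Dense rank within one column: 1 = smallest error; ties share a rank."""
    values = [rows[n][col] for n in names]
    return {n: 1 + len({v for v in values if v < rows[n][col]}) for n in names}

avg_rank = {n: 0.0 for n in names}
for col in range(n_cols):
    ranks = column_ranks(col)
    for n in names:
        avg_rank[n] += ranks[n] / n_cols

best = min(avg_rank, key=avg_rank.get)
```

Ranking throws away the size of the gaps between percentages, which is exactly what makes it robust to one column having a very different scale than the others.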
Another option. Do a two-step selection: choose several candidates with the best mean, and from them the one with the best maximum. Or vice versa: select a few with the best maximum, and from them the one with the best average.
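The two-step selection (mean first, then maximum) can be sketched as follows; the error values are hypothetical placeholders, and the shortlist size is an arbitrary choice:

```python
# Hypothetical table: per-module error percentages for several parameter sets.
rows = {
    "set 1": [4.0, 5.0, 29.0],
    "set 2": [9.0, 10.0, 12.0],
    "set 3": [2.0, 3.0, 28.0],
    "set 4": [8.0, 11.0, 13.0],
}

def mean(errors):
    return sum(errors) / len(errors)

# Step 1: keep the few candidates with the best (lowest) mean error.
shortlist = sorted(rows, key=lambda n: mean(rows[n]))[:2]

# Step 2: among them, pick the one with the best (lowest) maximum error.
best = min(shortlist, key=lambda n: max(rows[n]))
```

Reversing the order of the two steps (maximum first, then mean) is the "vice versa" variant and can pick a different winner, so it is worth trying both.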
So it's not by modules, but by layers?
Change the form of the matrix ModN[3][15]
;))
It's not about the error-free percentage.
For example, we have two error scores, 0.2 and 0.00000001 (problems arise especially if one of the estimates is exactly 0), and it is quite inconvenient to judge the number of those zeros visually. So it's easier to flip the scale so that the best score is 1: we take 1 - 0.2 and 1 - 0.00000001 and get 0.8 and 0.99999999. Clearly, multiplying these values gives a total quality of about 0.8; if both scores were 0.8, the answer would be 0.64. This option is the simplest.
It's easier to compute, and the total is easier to read.
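The multiplication described above, using the post's own example values, looks like this in a short Python sketch:

```python
# Error scores on a 0..1 scale; 0 is perfect (the post's own example values).
errors = [0.2, 0.00000001]

# Flip the scale so the best score is 1, then multiply the per-module qualities.
qualities = [1.0 - e for e in errors]

total = 1.0
for q in qualities:
    total *= q          # ~0.8: dominated by the worst module

total_equal = 0.8 * 0.8  # the post's second example: both scores 0.8 give 0.64
```

Because the qualities are multiplied, the total can never exceed the worst single module, which is what makes this a strict "weakest link" criterion.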
I've been following you for a long time. Interesting personality. I respect you.
In any context, historical data can only be used in conjunction with the current situation. This is important. Historical data, no matter how good it is, is negative. What is my point? Market prices are not a projectile following a certain trajectory.
I don't understand the logic at all. If the module error is 4.43% on unlearned data, then 100 - 4.43 = 95.57% is the percentage of error-free answers. Why should this percentage be worse than 95.01%? What am I not getting?
There, those error-free answers were multiplied and the total was again subtracted from 100%... so there was a conversion back, and it was about