Machine learning in trading: theory, models, practice and algo-trading - page 1418

 
Yuriy Asaulenko:

Stop clowning around, clown.

Give me some proof when you spout more nonsense; your drawings with a bunch of dots have already been discussed. What's next?

What other methodology? Did someone miss something?

Of the two of us, you're the clown; go pine for your departed circus somewhere else, "sir".

No one is interested in your posts; they're not funny, not informative, nothing... understandish? If you're bored, that's your problem, so go away.
 
Maxim Dmitrievsky:

Give me some proof when you spout more nonsense; your drawings with a bunch of dots have already been discussed. What's next?

What other methodology? Did someone miss something?

Of the two of us, you're the clown; go pine for your departed circus somewhere else, "sir".

No one is interested in your posts; they're not funny, not informative, nothing... understandish? If you're bored, that's your problem, so go away.

Have you tried consulting a psychiatrist?

 
Yuriy Asaulenko:
Forests and NNs have enough intelligence to form the predictors they need on their own, internally, the ones most appropriate to the task.

If that's not just nonsense, then it's outright mockery, meant to confuse you and lead you down a slippery path.

There is no "intelligence" in forests and NNs; they are quite prosaic algorithms, like sorting, maybe a little more complicated, like matrix inversion...

The result depends EXACTLY on the predictors, much like the state of a chaotic system depends on its initial conditions.

Add a few noise features to deform the feature space, and that's the end of it.


I was wondering...

Have you ever fed raw price into a forest? I have a feeling you haven't, because anyone who had actually tried it wouldn't be posting such rubbish here...
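The experiment hinted at here is easy to run. A minimal sketch, assuming a synthetic random-walk series, scikit-learn, and numpy >= 1.20 (the window length, noise count, and split are illustrative, not anything posted in this thread): train a random forest once on raw price levels and once on increments, with a few noise columns appended to echo the "deform the feature space" point.

```python
# Minimal sketch (assumptions: synthetic random-walk "price", scikit-learn).
# Compares a forest fed raw price levels vs. increments, with noise features
# appended to illustrate the "add a few noise features" point above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
price = 1000.0 + np.cumsum(rng.normal(size=5000))    # synthetic random walk
window, n_noise = 20, 5

d = np.diff(price)                                   # increments
y = (d[window:] > 0).astype(int)                     # label: next move up?
# feature rows aligned so each ends at the time step the label follows
X_price = np.lib.stride_tricks.sliding_window_view(price, window)[1:len(y) + 1]
X_incr = np.lib.stride_tricks.sliding_window_view(d, window)[:len(y)]

for name, X in {"raw price": X_price, "increments": X_incr}.items():
    Xn = np.hstack([X, rng.normal(size=(len(X), n_noise))])  # noise columns
    split = len(Xn) * 2 // 3                                 # chronological split
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(Xn[:split], y[:split])
    print(f"{name}: out-of-sample accuracy = {clf.score(Xn[split:], y[split:]):.3f}")
```

On a pure random walk both variants sit near 0.5 by construction; the point on real data is that raw price levels drift outside the training range, and trees cannot extrapolate past the split values they memorized, while increments at least stay in a stationary range.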

 
Grail:

If that's not just nonsense, then it's outright mockery, meant to confuse you and lead you down a slippery path.

There is no "intelligence" in forests and NNs; they are quite prosaic algorithms, like sorting, maybe a little more complicated, like matrix inversion...

The result depends EXACTLY on the predictors, much like the state of a chaotic system depends on its initial conditions.

Add a few noise features to deform the feature space, and that's the end of it.


I was wondering...

Have you ever fed raw price into a forest? I have a feeling you haven't, because anyone who had actually tried it wouldn't be posting such rubbish here...

So it turns out I'm very easy to "confuse and lead down a slippery path". ))

Of course I have no intelligence, but even that is enough to form the necessary predictors from a time series on my own. )

Yes, in fact I have fed it in, and not only me, and not only into a forest but also into an NN. From 20 to 50 samples. I'm quite satisfied with the results. If you read the theory of NNs, they are quite capable of such time-series processing tasks. By the way, almost every package has models that work directly with time series; you can see for yourself that there is no problem. )
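For reference, the "feed 20 to 50 samples straight into an NN" setup reads like a plain sliding-window regression. A minimal sketch, assuming scikit-learn's MLPRegressor and a synthetic noisy sine series (the window of 50 is taken from the "20 to 50 samples" remark; everything else is illustrative):

```python
# Minimal sketch: the last 50 samples of a series as one input vector,
# a small NN predicting the next value (assumptions: sklearn, synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
ts = np.sin(np.linspace(0, 60, 3000)) + 0.1 * rng.normal(size=3000)

window = 50
X = np.lib.stride_tricks.sliding_window_view(ts, window)[:-1]
y = ts[window:]                       # next value after each window

split = len(X) * 2 // 3               # chronological split, no shuffling
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
print("out-of-sample R^2:", net.score(X[split:], y[split:]))
```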

 
Yuriy Asaulenko:

So it turns out I'm very easy to "confuse and lead down a slippery path". ))

Of course I have no intelligence, but even that is enough to form the necessary predictors from a time series on my own. )

Yes, in fact I have fed it in, and not only me, and not only into a forest but also into an NN. From 20 to 50 samples. I'm quite satisfied with the results. If you read the theory of NNs, they are quite capable of such time-series processing tasks. By the way, almost every package has models that work directly with time series; you can see for yourself that there is no problem. )

Vasya, take a walk with "knowledge" like that; no one here is interested in you.

When I write about increments, you talk about increments; when I write about prices, you talk about prices. You've heard the bell ring, but you don't know where it is. Tomorrow I'll write about something else, and you'll be parroting that for another year; your reaction time is measured in years, like the densest forest's.

 
Maxim Dmitrievsky:

Vasya, take a walk with "knowledge" like that; no one here is interested in you.

When I write about increments, you talk about increments; when I write about prices, you talk about prices. You've heard the bell ring, but you don't know where it is. Tomorrow I'll write about something else, and you'll be parroting that for another year; your reaction time is measured in years, like the densest forest's.

You'd better see a psychiatrist. The symptoms are obvious. Probably the acute phase. They won't cure you, but they'll take the edge off.

 
Yuriy Asaulenko:

So it turns out I'm very easy to "confuse and lead down a slippery path". ))

Of course I have no intelligence, but even that is enough to form the necessary predictors from a time series on my own. )

Yes, in fact I have fed it in, and not only me, and not only into a forest but also into an NN. From 20 to 50 samples. I'm quite satisfied with the results. If you read the theory of NNs, they are quite capable of such time-series processing tasks. By the way, almost every package has models that work directly with time series; you can see for yourself that there is no problem. )

With all due respect (you understand...), you have a very poor grasp of how ML works, at the level of obscurantism à la "if you set a self-driving program loose on price movements, then...". People like you feed screenshots of charts into CNNs and news text into LSTMs. Maxim Dmitrievsky characterized your mentality correctly: you've heard the bell ring, but you don't know where it is. In an MLP, and even more so in forests, there is no magical search for features; they are all blunt heuristics that merely run faster than the "quasi-optimal" kNN and Parzen-window methods, which essentially average the neighborhood around a point. Read Mitchell; write your own kNN, Parzen window, MLP, and forest... In a CNN, the convolution itself is a hard-coded preprocessing heuristic for locally correlated features: take the pictures and shuffle the pixels, all with the same permutation, and that's it... the CNN will fail immediately and find nothing, because everything works only within very narrow limits; nothing gets "found" by itself.

If you stick raw price into the features, you'll get nothing better than an ordinary moving average optimized on history.
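As for "average the neighborhood around a point": the two reference methods named here really are near one-liners. A minimal from-scratch sketch in numpy, in the spirit of the "write your own" suggestion (the Gaussian kernel and all parameters are illustrative):

```python
# Minimal sketch: kNN and Parzen-window regression as weighted averages of
# training targets in a neighborhood of the query point.
import numpy as np

def knn_predict(Xtr, ytr, x, k=5):
    # average the targets of the k nearest training points
    d = np.linalg.norm(Xtr - x, axis=1)
    return ytr[np.argsort(d)[:k]].mean()

def parzen_predict(Xtr, ytr, x, h=1.0):
    # Nadaraya-Watson: kernel-weighted average over all training points
    d = np.linalg.norm(Xtr - x, axis=1)
    w = np.exp(-0.5 * (d / h) ** 2)   # Gaussian window of width h
    return float(w @ ytr / w.sum())

rng = np.random.default_rng(2)
Xtr = rng.normal(size=(500, 3))
ytr = Xtr[:, 0] ** 2 + 0.1 * rng.normal(size=500)
x = np.array([0.5, 0.0, 0.0])
print(knn_predict(Xtr, ytr, x), parzen_predict(Xtr, ytr, x))
```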
 
Grail:

With all due respect (you understand...), you have a very poor grasp of how ML works, at the level of obscurantism à la "if you set a self-driving program loose on price movements, then...". People like you feed screenshots of charts into CNNs and news text into LSTMs. Maxim Dmitrievsky characterized your mentality correctly: you've heard the bell ring, but you don't know where it is. In an MLP, and even more so in forests, there is no magical search for features; they are all blunt heuristics that merely run faster than the "quasi-optimal" kNN and Parzen-window methods, which essentially average the neighborhood around a point. Read Mitchell; write your own kNN, Parzen window, MLP, and forest... In a CNN, the convolution itself is a hard-coded preprocessing heuristic for locally correlated features: take the pictures and shuffle the pixels, all with the same permutation, and that's it... the CNN will fail immediately and find nothing, because everything works only within very narrow limits; nothing gets "found" by itself.

If you stick raw price into the features, you'll get nothing better than an ordinary moving average optimized on history.

I have nothing to say to you, because there is no question to discuss. Just remember: the surest way to be deceived is to consider yourself smarter than others. (c)

Yes, my NN-based system has been running for over a year now without retraining. The very first, still trial, trades are here in this thread. A new system? I'm in no hurry.

 

Activation functions should be thrown out when the input data is already prepared, so that features are not lost on the way to the final layer.

As long as you shove the data into a template NN, you won't get any effect.
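If I read this right, "throwing out the activation functions" corresponds to making the hidden layers linear. A sketch under that assumption, using scikit-learn's activation='identity' so nothing squashes or clips the prepared features on the way to the final layer (the data here is a synthetic stand-in for "already prepared" inputs):

```python
# Minimal sketch (assumption: "prepared data" = features engineered upstream).
# activation='identity' removes the nonlinearity, passing features through
# the hidden layers to the final layer unsquashed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X_prepared = rng.normal(size=(1000, 5))      # stand-in for engineered features
y = X_prepared @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])

net = MLPRegressor(hidden_layer_sizes=(16,), activation='identity',
                   max_iter=2000, random_state=0)
net.fit(X_prepared, y)
print("R^2:", net.score(X_prepared, y))
```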

 

With all due respect (you understand...), I will perhaps side with Asaulenko. Almost one and a half thousand pages, and only one conclusion has emerged: you have to look for predictors.

Where does this conclusion come from? It's very simple: people feed candles into ML and get 50/50. Attach some indicators, and the error decreases. But, gentlemen, if you feed in a hypothetical grail that always guesses right, you will get zero error at the ML output. And if you have found a grail set of indicators, you don't need ML at all; you can get by with simpler means.
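The "hypothetical grail" argument can even be checked mechanically: hand any classifier a feature that already contains the answer, and the error drops to zero, which is exactly why having such a feature makes the ML step redundant. A minimal sketch with purely synthetic data (all names illustrative):

```python
# Minimal sketch: a "grail" feature that knows the answer drives the model
# error to zero -- at which point the model itself adds nothing.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=2000)                       # up/down labels
noise = rng.normal(size=(2000, 5))                      # uninformative "candles"
grail = y[:, None] + 0.01 * rng.normal(size=(2000, 1))  # feature = the answer

for name, X in {"noise only": noise,
                "noise + grail": np.hstack([noise, grail])}.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    acc = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)
    print(f"{name}: test accuracy = {acc:.3f}")
```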