Machine learning in trading: theory, models, practice and algo-trading - page 2278

 
mytarmailS:

As far as I remember, the TS worked for a little while and then died...

Filtering in the usual sense (moving averages, filters, etc.) always means lag, and lag in the market means losses...

We need to build on a different paradigm (one without lag), like levels...
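For what it's worth, the lag of the usual filters is quantifiable: a simple moving average of length N delays a trend by (N-1)/2 bars. A minimal sketch of that fact (plain numpy, not tied to any trading platform):

```python
import numpy as np

def sma(x, n):
    """Simple moving average; 'valid' mode keeps only full windows."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

# A pure linear trend: price rises 1 unit per bar.
price = np.arange(100, dtype=float)

n = 21
smoothed = sma(price, n)

# On a linear trend the SMA reproduces the price (n - 1) / 2 bars ago,
# i.e. the filter's group delay is (n - 1) / 2 = 10 bars here.
lag = (n - 1) / 2
print(smoothed[-1], price[-1] - lag)  # both ≈ 89.0
```

Any causal smoother pays this price; that is the lag being argued about here.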

What does lag have to do with it? It's the same overfit, what difference does it make how you train it?

You need to look for a pattern first.

 
mytarmailS:

What are you waiting for?

I have a whole wagonload of these ideas, queued up.

The file gives an example for two microphones; another idea is to use several currencies for the same purpose.

I also need to look into blind adaptation.



Visualization of the loss function

 
Maxim Dmitrievsky:

What does lag have to do with it? It's the same overfit, what difference does it make how you train it?

You need to look for a pattern first.

What overfit is there in a moving average? Do you even read what I'm writing?

 

Is there anything in ALGLIB for compressing and stretching charts?

I see several interpolation functions. Which one will work best for us? And which is fastest?

We've given up on the idea...

Say we've found something to compress and stretch charts with. What next? How do we use it?

1) Recognize a dozen compressed/stretched variants of the current situation? Then what? Average them? After all, maybe 50% will say buy and 50% sell.

2) Is there any way to use this in training? Can we enlarge the training set with it?
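I can't speak for ALGLIB specifically, but the compress/stretch step being discussed is just resampling, which can be sketched with plain linear interpolation (numpy here; `rescale` is my name for it, not an ALGLIB function):

```python
import numpy as np

def rescale(series, new_len):
    """Stretch or compress a series to new_len points via linear interpolation."""
    old_x = np.linspace(0.0, 1.0, len(series))
    new_x = np.linspace(0.0, 1.0, new_len)
    return np.interp(new_x, old_x, series)

window = np.sin(np.linspace(0, 3, 60))   # the "current situation", 60 bars

compressed = rescale(window, 30)          # squeezed to half the bars
restored   = rescale(compressed, 60)      # stretched back to the original length

# Round-tripping loses some detail but keeps the overall shape.
print(float(np.max(np.abs(restored - window))))
```

The same `rescale` could serve both uses in the question: generating rescaled variants for recognition, or augmenting the training set with stretched/compressed copies.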

 
elibrarius:

We've given up on the idea...

It didn't work for me...

I stretched and compressed by up to 10x.

Garbage.


There's another way... don't fight the invariance, compress the dimensionality instead.

Or forget it)

 
mytarmailS:

What overfit is there in a moving average? Do you even read what I'm writing?

Use your brain.)

Having the neural network search through MA periods is just overfitting.
 
mytarmailS:

It didn't work for me...

I stretched and compressed by up to 10x.

Garbage.


There's another way... don't fight the invariance, compress the dimensionality instead.

Or forget it)

10x is too much.

I think it should be no more than 50%. Try, for instance, 1.1x, 1.3x, 1.5x.


If your code is ready and you only need to change a multiplier, check those options.
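The multiplier scan suggested above could be wired up as a simple loop over scale factors, averaging the model's votes across the rescaled variants. A sketch under stated assumptions: `model` is a toy stand-in for whatever classifier is actually in use, and `rescale` is a generic linear-interpolation resampler (both names are mine):

```python
import numpy as np

def rescale(series, new_len):
    """Linear-interpolation resample to new_len points (stand-in for any resampler)."""
    return np.interp(np.linspace(0, 1, new_len),
                     np.linspace(0, 1, len(series)), series)

def model(window):
    """Placeholder classifier: +1 (buy) if the window ends above its start, else -1."""
    return 1 if window[-1] > window[0] else -1

window = np.cumsum(np.random.default_rng(0).normal(size=100))
base_len = len(window)

votes = []
for mult in (1.0, 1.1, 1.3, 1.5):            # moderate stretches, as suggested above
    for new_len in (int(base_len * mult), int(base_len / mult)):
        votes.append(model(rescale(window, new_len)))

# Averaged vote; 0 would mean the variants split 50/50 buy vs. sell.
signal = np.sign(np.mean(votes))
print(signal)
```

This also makes the 50/50 worry from earlier concrete: when `np.mean(votes)` lands near zero, the honest answer is "no trade".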

 
mytarmailS:

It didn't work for me...

I stretched and compressed by up to 10x.

Garbage.


There's another way... don't fight the invariance, compress the dimensionality instead.

Or forget it)

Have you tried item 1? I.e., when forecasting, did you feed several rescaled variants of the current situation into the model?
 
Maxim Dmitrievsky:

Use your brain.)

Having the neural network search through MA periods is just overfitting.

I never turned it off...

The network controls the MA period; the period runs from 2 to 500, I think...

A period from 2 to 500 means a lag from 2 to 500.

Whether or not the network is overfitted is beside the point... The point is that it controls the period, and period == lag.

elibrarius:
Have you tried item 1? I.e., when forecasting, did you feed several rescaled variants of the current situation into the model?

Yes

 

I'm very interested in the SPADE algorithm, but I don't know how to approach it yet; it's been spinning around in my head for half a year now...

It's not at all obvious how to preprocess the data for it, same with the target, plus it's extremely resource-hungry - definitely not a "big data" algorithm...

But it still seems to me that it's the best algorithm for data-mining the market.
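For context, SPADE (Zaki's sequential-pattern miner) works on sequences of discrete events, so the preprocessing question mostly comes down to turning bars into symbols. A minimal sketch of one possible discretization step only; the quantile binning and window scheme are my assumptions, not a claim about the right way to feed SPADE:

```python
import numpy as np

def to_symbols(prices, n_bins=5):
    """Discretize bar-to-bar log returns into n_bins symbols by quantile,
    turning a price series into the discrete event stream a miner like SPADE expects."""
    returns = np.diff(np.log(prices))
    # Quantile edges make each symbol roughly equally frequent.
    edges = np.quantile(returns, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(returns, edges)  # symbols 0 .. n_bins-1

# Synthetic random-walk prices just to exercise the function.
rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500)))

events = to_symbols(prices)
# Each window of events becomes one "sequence" for the miner; the number of
# candidate patterns grows fast with window count and length, which is part
# of why the approach is so resource-hungry.
sequences = [events[i:i + 10] for i in range(0, len(events) - 10, 10)]
print(len(events), len(sequences))
```

The target question remains open: one option is to label each sequence by the sign of the return that follows its window, but that is a design choice, not something SPADE dictates.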