Machine learning in trading: theory, models, practice and algo-trading - page 2269

 
Maxim Dmitrievsky:

I wrote my own GAN; there's nothing complicated about it. It's not recurrent, though, so I'll have to redo it.

An example in Torch.

Here is another example.

When I have enough time, I'll try to figure it out.
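
A minimal sketch of a GAN of this kind in PyTorch (not the linked examples; the sizes and the placeholder data are assumptions):

import torch
from torch import nn

latent_dim, data_dim, batch = 16, 8, 64

G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(2048, data_dim)  # placeholder for real increments

for step in range(1000):
    real = real_data[torch.randint(0, len(real_data), (batch,))]
    fake = G(torch.randn(batch, latent_dim))

    # Discriminator step: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    d_loss = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # Generator step: make the discriminator call fakes real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()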

 
Rorschach:

I compared different generative models from the library above through my lib. It turns out GMM works best for tabular data (a dataframe of increments). Copulas come second in efficiency. Neural-network models like TabularGAN and others did worse, though maybe I did something wrong. There's also this option.
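
A minimal sketch of the GMM variant described above, assuming scikit-learn's GaussianMixture and a placeholder dataframe of increments (the column name is hypothetical):

import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

# Placeholder increments; in the thread this comes from real price data.
rng = np.random.default_rng(0)
df = pd.DataFrame({"ret": rng.standard_t(df=4, size=5000) * 0.001})

# Fit a Gaussian mixture to the increments and sample synthetic ones.
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(df[["ret"]].values)

synthetic, _ = gmm.sample(5000)  # returns (samples, component labels)
print(df["ret"].std(), synthetic.std())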

 
Maxim Dmitrievsky:

The networks seem to have poor noise tolerance, maybe that's why the results are worse.

I wanted to add noise to the data at every epoch, but I never got around to it.
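
A minimal sketch of that idea, assuming a plain PyTorch training loop with a placeholder model and data: fresh noise is added to the inputs at every epoch.

import torch
from torch import nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(512, 10)   # placeholder features
y = torch.randn(512, 1)    # placeholder targets

for epoch in range(100):
    # Fresh Gaussian noise on the inputs every epoch, so the network
    # never sees exactly the same sample twice.
    x_noisy = x + 0.01 * torch.randn_like(x)
    opt.zero_grad()
    loss = loss_fn(model(x_noisy), y)
    loss.backward()
    opt.step()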

 
Rorschach:

It looks like they average very heavily. The output is similar samples with little spread; no matter how I change the latent vector, the values come out too close together.
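
A minimal sketch of that check, with an untrained stand-in in place of the trained generator: probe widely spread latent vectors and measure how much the outputs actually vary.

import torch
from torch import nn

# Untrained stand-in for the trained GAN generator.
gen = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 8))

with torch.no_grad():
    z = 3.0 * torch.randn(1000, 16)   # deliberately spread-out latents
    samples = gen(z)

# Collapse symptom: per-feature std of the samples is tiny relative
# to the std of the training data.
print(samples.std(dim=0))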

 
Maxim Dmitrievsky:

How about reducing the depth of history?

 
Rorschach:

I tried different depths; both the autoencoder and the GMM output strongly averaged values. The autoencoder compresses by definition, but it's unclear why the GANs do it. Dropout doesn't seem to help either.
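
A minimal sketch of how that averaging can be quantified, assuming real and generated increments as numpy arrays (the "fake" array here is an artificially averaged stand-in): compare per-feature standard deviations and tail quantiles.

import numpy as np

rng = np.random.default_rng(1)
# Placeholder "real" increments and an artificially averaged "fake" set.
real = rng.standard_t(df=4, size=(5000, 4)) * 0.001
fake = real.mean(axis=0) + 0.2 * real.std(axis=0) * rng.standard_normal((5000, 4))

# A generator that averages shows shrunken std and clipped tails.
print("std ratio (fake/real):", fake.std(axis=0) / real.std(axis=0))
print("q99 ratio:",
      np.quantile(np.abs(fake), 0.99, axis=0) / np.quantile(np.abs(real), 0.99, axis=0))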

 
Maxim Dmitrievsky:

Averaging and blurring are roughly the same thing, right? I found this article.

 
Rorschach:

Yeah, compression of information.

It makes sense for digits, but it works worse on tabular data.

That's why TabularGAN exists, in the package above.
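
The exact package isn't named here; as one concrete example, the ctgan library exposes a tabular GAN with a fit/sample interface (the dataframe and its columns are placeholders):

import numpy as np
import pandas as pd
from ctgan import CTGAN

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "ret": rng.standard_t(df=4, size=2000) * 0.001,  # placeholder increments
    "hour": rng.integers(0, 24, size=2000),          # a discrete feature
})

model = CTGAN(epochs=10)
model.fit(data, discrete_columns=["hour"])  # discrete columns must be listed
synthetic = model.sample(2000)
print(synthetic.describe())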

 
Maxim Dmitrievsky:

I skimmed it; it seems to be about a different noise distribution and unsuitable metrics.

It's better to check it on test data under greenhouse (controlled) conditions.
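
A minimal sketch of such a greenhouse test: generate data from a known distribution, fit the model, and check whether it recovers the known structure.

import numpy as np
from sklearn.mixture import GaussianMixture

# Ground truth we control: a two-mode mixture.
rng = np.random.default_rng(2)
real = np.concatenate([rng.normal(-1.0, 0.3, size=(3000, 1)),
                       rng.normal(+1.0, 0.3, size=(3000, 1))])

gmm = GaussianMixture(n_components=2, random_state=0).fit(real)
fake, _ = gmm.sample(6000)

# With known truth, failure is unambiguous: the fitted means should
# sit near -1 and +1, and the overall spread should match.
print("fitted means:", sorted(float(m) for m in gmm.means_.ravel()))
print("fake std:", fake.std(), "real std:", real.std())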
 
Rorschach:

An interesting topic is running the network in reverse.

Feed noise to the inputs, get the spectrum at the output.

https://arxiv.org/pdf/1806.08734.pdf
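
A minimal sketch in the spirit of that probe and the linked paper's setting (it evaluates an untrained network on an input grid rather than literally feeding noise, since for a pointwise map of i.i.d. noise the output spectrum stays flat): take the FFT of the function the network realizes.

import torch
from torch import nn

# Untrained coordinate network: maps a 1-D input to a value.
net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))

with torch.no_grad():
    t = torch.linspace(-1.0, 1.0, 4096).unsqueeze(1)
    y = net(t).squeeze()

# Magnitude spectrum of the realized function: for random tanh networks
# the energy sits almost entirely in the low frequencies, which is the
# spectral-bias effect the paper formalizes.
spectrum = torch.fft.rfft(y - y.mean()).abs()
print(spectrum[:8])
print(spectrum[-8:])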
