"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 98

 
Maxim Romanov:

And how do you measure the degree of entropy on the data?

Above is a link to my wiki and to Alexander's, on Habr.

 
kapelmann:

I certainly didn't look through all of them, but those "with sources in MQL" all come without the neural networks themselves; essentially they are exercises in OOP in the form of wrappers around various libraries or around the same NeuroPro. Frankly, after a dozen such articles they all look the same. Sometimes it even seems that OOP does more harm than good for trading robots; IMHO, OOP starts to show its advantages in projects from 100 thousand lines up, and wrapping three functions in five classes, with inheritance no less, is ridiculous.


PS: please do not teach me how to search the Internet; give specific links to the OPEN CODE of neural networks, not wrappers, not rewrites of books and articles.

These two paragraphs contradict each other. Anyone who knows how to search the Internet (in particular on this site) will quickly find implementations of the basic NS types in pure MQL, without any dependencies or wrappers.

 

It is impossible to make a normal matrix in MQL without crutches. What kind of NS can we even talk about with such limited language capabilities?

Many people here can't even reproduce the MLP formula.
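
For reference, the MLP formula in question is just the layer recursion y = f(W*x + b), applied layer by layer. And the usual "crutch" for matrices in MQL is a class over a flat one-dimensional array, since the language has no dynamic 2-D arrays with both dimensions chosen at runtime. A minimal sketch (CMatrix and Forward are hypothetical names, not from any library discussed here):

// Flat row-major matrix: element (r,c) lives at index r*cols+c
class CMatrix
  {
private:
   int               m_rows;
   int               m_cols;
   double            m_data[];        // 1-D storage, the usual MQL workaround
public:
   void              Init(const int rows,const int cols)
     {
      m_rows=rows;
      m_cols=cols;
      ArrayResize(m_data,rows*cols);
      ArrayInitialize(m_data,0.0);
     }
   double            Get(const int r,const int c) const          { return m_data[r*m_cols+c]; }
   void              Set(const int r,const int c,const double v) { m_data[r*m_cols+c]=v;      }
   // One MLP layer: y = f(W*x + b), here with a logistic activation
   void              Forward(const double &x[],const double &b[],double &y[]) const
     {
      ArrayResize(y,m_rows);
      for(int r=0;r<m_rows;r++)
        {
         double s=b[r];
         for(int c=0;c<m_cols;c++)
            s+=m_data[r*m_cols+c]*x[c];
         y[r]=1.0/(1.0+MathExp(-s));  // squash to (0,1)
        }
     }
  };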

 

The returns of the original series are taken, and their mean and variance are extracted. On that basis, returns are drawn from a normal distribution and a surrogate series is reconstructed from them.

Entropy was measured on both series. In the vast majority of cases it is the same, i.e. the quotes are a random walk (SB).

And on the returns the difference is as it should be (entropy is higher on the random surrogates):

There just doesn't seem to be enough sensitivity on the raw series. I wonder if your NSs have enough "sensitivity". Doubtful.

And here is the result on Bitcoin (supposedly it should be even less efficient). And indeed it is.

Especially on H4.
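
A minimal sketch of that surrogate test in MQL5, assuming simple arithmetic returns (RandNormal and MakeSurrogate are hypothetical names; the entropy estimator itself is left out, since the post does not say which one was used):

// Box-Muller draw from N(mean,sigma) built on MQL5's MathRand()
double RandNormal(const double mean,const double sigma)
  {
   double u1=(MathRand()+1.0)/32768.0;   // uniform on (0,1]
   double u2=(MathRand()+1.0)/32768.0;
   return mean+sigma*MathSqrt(-2.0*MathLog(u1))*MathCos(2.0*M_PI*u2);
  }

// Rebuild a surrogate price path whose returns are i.i.d. normal with the
// same mean and variance as the returns of the real series
void MakeSurrogate(const double &price[],double &surrogate[])
  {
   int n=ArraySize(price);               // assumes n>2
   double mean=0.0,var=0.0;
   for(int i=1;i<n;i++)
      mean+=price[i]-price[i-1];
   mean/=(n-1);
   for(int i=1;i<n;i++)
     {
      double d=price[i]-price[i-1]-mean;
      var+=d*d;
     }
   double sigma=MathSqrt(var/(n-2));
   ArrayResize(surrogate,n);
   surrogate[0]=price[0];
   for(int i=1;i<n;i++)
      surrogate[i]=surrogate[i-1]+RandNormal(mean,sigma);
  }

The same entropy estimator is then run on price[] and surrogate[]; if the two values match, the quotes are indistinguishable from a random walk by that measure.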

 
Roffild:

It is impossible to make a normal matrix in MQL without crutches. What kind of NS can we even talk about with such limited language capabilities?

Many people here can't even reproduce the MLP formula.

Did you fall from the moon, or off the stove as a child?

 
Maxim Dmitrievsky:

The returns of the original series are taken, and their mean and variance are extracted. On that basis, returns are drawn from a normal distribution and a surrogate series is reconstructed from them.

Entropy was measured on both series. In the vast majority of cases it is the same, i.e. the quotes are a random walk (SB).

And on the returns the difference is as it should be (entropy is higher on the random surrogates):

There just doesn't seem to be enough sensitivity on the raw series. I wonder if your NSs have enough "sensitivity". Doubtful.

And here is the result on Bitcoin (supposedly it should be even less efficient). And indeed it is.

Especially on H4.

Nice one!

And how does entropy itself behave in sliding time windows?

Obviously, if we carry out studies with sliding time windows that are multiples of 1 hour (on one-minute data: 60, 120, 180, ...), then we should identify those windows where entropy is minimal on average.

These are the samples to work with - I'm sure NS will find regularities there.
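
A minimal sketch of that scan in MQL5, assuming Shannon entropy over up/down-binned one-minute returns; the two-bin coding and the names WindowEntropy/ScanWindows are assumptions, not something specified in the post:

// Shannon entropy of the up/down distribution inside one window (2 bins)
double WindowEntropy(const double &ret[],const int start,const int len)
  {
   int up=0;
   for(int i=start;i<start+len;i++)
      if(ret[i]>0.0)
         up++;
   double p[2];
   p[0]=up/(double)len;
   p[1]=1.0-p[0];
   double h=0.0;
   for(int k=0;k<2;k++)
      if(p[k]>0.0)
         h-=p[k]*MathLog(p[k]);
   return h;
  }

// Mean entropy for each window length that is a multiple of one hour,
// over non-overlapping window positions; the length with the smallest
// mean entropy is the candidate sample to feed to the NS
void ScanWindows(const double &ret[],const int maxHours)
  {
   int n=ArraySize(ret);
   for(int hrs=1;hrs<=maxHours;hrs++)
     {
      int len=hrs*60;                    // 60, 120, 180, ... minutes
      if(len>n)
         break;
      double sum=0.0;
      int cnt=0;
      for(int s=0;s+len<=n;s+=len)
        {
         sum+=WindowEntropy(ret,s,len);
         cnt++;
        }
      PrintFormat("window %4d min: mean entropy %.5f",len,sum/cnt);
     }
  }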

 
Грааль:

There is a ported Alglib (https://www.mql5.com/en/code/11077).

joo's initiative was doomed to failure, and not because people can't collaborate, but because the idea itself is futile.

Okay, thank you, I'm looking into it.

Alexander_K:

Because there is no real leader with knowledge here, so that the wards would stop going through options and would listen to him like children.

When I read this thread, tears streamed down my senile cheeks - how they begged some pendos to take the helm, but he's clueless himself and .... That's it, we're screwed.

A shameful and instructive thread.

Slavs need a leader, a father, stern but fair; it is in the genes. It is logical that Slavs feel this and ask other races (Anglo-Saxons, Jews, Arabs...) to lead them, since Slavs will not tolerate one of their own as leader unless he is mystified to the level of a prophet or anointed by God.

 
Maxim Dmitrievsky:

Later he wrote jpredictor, a standalone program in Java, with 2 models (MLP and SVM, to be specific) and automatic feature selection.

As far as I know "jpredictor" still has the same one neuron trained by parameter optimization, the output of jpredictor is weights for one neuron, which is obviously not something to be proud of.

 
Maxim Dmitrievsky:

Entropy was measured on both series. In the vast majority of cases it is the same, i.e. the quotes are a random walk (SB).

the level of logic is astonishing ))
 
Alexander_K:

And how does entropy itself behave in sliding time windows?

Obviously, if we carry out studies with sliding time windows that are multiples of 1 hour (on one-minute data: 60, 120, 180, ...), then we should identify those windows where entropy is minimal on average.

These are the samples to work with - I'm sure NS will find regularities there.

I'll look at it later, but the picture doesn't make me happy.

For example, for an ordinary man in the street, earning on the euro-dollar is unrealistic.

And, by the way, none of the "smart guys" present here can prove the opposite :) That's all that remains of the 4th forum... ugh. And they were going to write NS libs.