"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 94

 
Maxim Dmitrievsky:

I mean what you wrote: "share your knowledge". What knowledge? There are plenty of books and articles on ML. What other knowledge could there be?

I wasn't even around then, I'm just reading the old posts: there were all sorts of Privals, scientist-philosophers and someone else.

I warned you that it would be misunderstood. And here it is; it didn't take long.

 
Dmitry Fedoseev:

And its only carrier is exclusively you and no one else?

No, I am not a carrier of anything; moreover, I am stupid. That's why it's so easy for me to assess information in a detached, unbiased way.

 
Maxim Dmitrievsky:

What has the idol got to do with it? He was simply appealing for understanding and discretion. There's plenty of intelligence there, because a small-minded person could not have written his own standalone ML program.

And now... there's alglib. It has a neural network, a forest and regression. Do you need anything else? I agree, the neural network there is a bit convoluted, but the forest is fine. It's open for discussion, but nobody wants it; that's why no one even knows how to use it.

What? Does it take brains to do what Reshetov did? All it takes is not being afraid to embarrass yourself. Some network that is... with its coefficients fitted in the optimizer. Doesn't that strike you as funny?

A neural network. That's what I'm writing about: the experts here aren't up to developing a convenient way of applying a neural network, one you could pick up and use for any task in 5 minutes (and it is possible). When this all first started with neural networks, it was complete schizophrenia... One demanded a GUI right away, another wanted to randomly enable/disable connections, then some clown showed up with some echo network and demonstrated funny pictures... That's the level at which it's all going.

The forest. If someone had written a proper article about it, everyone would have understood it and things would have developed. But everyone here is only good at gossip.
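As a point of reference, here is roughly what using the alglib forest looks like from MQL5. This is a minimal sketch on toy data, not a recipe from the thread; the names (CDForest, CDecisionForest, CDFReport, DFBuildRandomDecisionForest, DFProcess) come from the ALGLIB port bundled with MetaTrader 5, and exact signatures may vary between terminal builds.

#include <Math\Alglib\dataanalysis.mqh>
// Sketch: train a random forest on toy data and classify one sample.
// Assumes the ALGLIB port shipped with MetaTrader 5; builds may differ.
void OnStart()
  {
   int npoints=100;   // training samples
   int nvars=2;       // features, e.g. two indicator values
   int nclasses=2;    // binary labels: 0 / 1
   CMatrixDouble xy;  // one row per sample, last column is the class label
   xy.Resize(npoints,nvars+1);
   for(int i=0;i<npoints;i++)
     {
      double f1=MathRand()/32767.0;
      double f2=MathRand()/32767.0;
      xy[i].Set(0,f1);
      xy[i].Set(1,f2);
      xy[i].Set(2,(f1+f2>1.0)?1:0);   // toy rule to learn
     }
   int info;
   CDecisionForest df;
   CDFReport rep;
   // 100 trees, each grown on 66% of the data
   CDForest::DFBuildRandomDecisionForest(xy,npoints,nvars,nclasses,100,0.66,info,df,rep);
   double x[2]={0.8,0.9};
   double y[];
   ArrayResize(y,nclasses);
   CDForest::DFProcess(df,x,y);       // y receives class probabilities
   PrintFormat("P(0)=%.3f  P(1)=%.3f",y[0],y[1]);
  }

Swap the toy rows for indicator values and a trade outcome label and the call sequence stays the same.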

 
Maxim Dmitrievsky:

No, I'm not a carrier of anything; moreover, I'm stupid. That's why it's so easy for me to assess information in a detached, unbiased way.

And you don't need to understand anything; the main thing is plenty of showing off and bigger pictures.

 
Dmitry Fedoseev:

What? Does it take brains to do what Reshetov did? All it takes is not being afraid to embarrass yourself. Some network that is... with its coefficients fitted in the optimizer. Doesn't that strike you as funny?

A neural network. That's what I'm writing about: the experts here aren't up to developing a convenient way of applying a neural network, one you could pick up and use for any task in 5 minutes (and it is possible). When this all first started with neural networks, it was complete schizophrenia... One demanded a GUI right away, another wanted to randomly enable/disable connections, then some clown showed up with some echo network and demonstrated funny pictures... That's the level at which it's all going.

The forest. If someone had written a proper article about it, everyone would have understood it and things would have developed. But everyone here is only good at gossip.

Later he wrote a standalone jpredictor in Java: two models (an MLP and an SVM, to be specific) with automatic feature selection.

Do you know what's wrong with mine? Train it on artificial data in 1 minute and you get boundless profits on that artificial data. On the real market you can't do that, of course. Hence the story that the work from there on lies beyond understanding ML and within understanding the patterns of the market.

You don't know anything yourself, yet you write.
 
Maxim Dmitrievsky:

Later he wrote a standalone jpredictor in Java: two models (an MLP and an SVM, to be specific) with automatic feature selection.

What's wrong with mine? Train it on artificial data in 1 minute and you get boundless profits on that artificial data. On the real market you can't do that, of course. Hence the story that the work from there on lies beyond understanding ML and within understanding the patterns of the market.

You don't know anything yourself, yet you write.

I haven't seen what he wrote there, but I can imagine, having seen the rest of his work.

Which one is yours, and where is it? The one that goes through a bunch of libraries? Sorry, but it's hard even to figure out the example there, let alone use any of it in your own work.

Oh, it's the normal one... I've already explained it twice, and I'm explaining it for the third time: it should be quick and easy to apply to any problem that arises when writing an EA... so you don't get bogged down in the Expert Advisor's algorithm at all... and it is possible. It's sad, though, that some people can't even dream of that.

***

There are many things I don't know, but back in the day I nevertheless had code for a universal multilayer perceptron trained by error backpropagation. And I don't bother with modern ML methods, because it all sounds more like a cult than anything serious.
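For scale, such a perceptron really does fit in a few dozen lines. Below is a minimal from-scratch sketch, not that old code: one hidden layer, sigmoid activations, online gradient descent by error backpropagation, sanity-checked on XOR.

// Minimal multilayer perceptron (one hidden layer) trained by backpropagation.
#define NIN  2
#define NHID 4
#define NOUT 1

double w1[NHID][NIN+1];   // hidden-layer weights (+1 for bias)
double w2[NOUT][NHID+1];  // output-layer weights (+1 for bias)
double hid[NHID];
double out[NOUT];

double Sigmoid(double v) { return 1.0/(1.0+MathExp(-v)); }

void Forward(const double &x[])
  {
   for(int j=0;j<NHID;j++)
     {
      double s=w1[j][NIN];                        // bias
      for(int i=0;i<NIN;i++) s+=w1[j][i]*x[i];
      hid[j]=Sigmoid(s);
     }
   for(int k=0;k<NOUT;k++)
     {
      double s=w2[k][NHID];                       // bias
      for(int j=0;j<NHID;j++) s+=w2[k][j]*hid[j];
      out[k]=Sigmoid(s);
     }
  }

void Backward(const double &x[],const double &target[],double lr)
  {
   double dOut[NOUT],dHid[NHID];
   for(int k=0;k<NOUT;k++)                        // output deltas
      dOut[k]=(out[k]-target[k])*out[k]*(1.0-out[k]);
   for(int j=0;j<NHID;j++)                        // propagate error backwards
     {
      double e=0.0;
      for(int k=0;k<NOUT;k++) e+=dOut[k]*w2[k][j];
      dHid[j]=e*hid[j]*(1.0-hid[j]);
     }
   for(int k=0;k<NOUT;k++)                        // update output layer
     {
      for(int j=0;j<NHID;j++) w2[k][j]-=lr*dOut[k]*hid[j];
      w2[k][NHID]-=lr*dOut[k];
     }
   for(int j=0;j<NHID;j++)                        // update hidden layer
     {
      for(int i=0;i<NIN;i++) w1[j][i]-=lr*dHid[j]*x[i];
      w1[j][NIN]-=lr*dHid[j];
     }
  }

void OnStart()
  {
   // XOR as the classic sanity check
   double X[4][NIN]={{0,0},{0,1},{1,0},{1,1}};
   double T[4][NOUT]={{0},{1},{1},{0}};
   for(int j=0;j<NHID;j++) for(int i=0;i<=NIN;i++)  w1[j][i]=MathRand()/32767.0-0.5;
   for(int k=0;k<NOUT;k++) for(int j=0;j<=NHID;j++) w2[k][j]=MathRand()/32767.0-0.5;
   double x[NIN],t[NOUT];
   for(int epoch=0;epoch<20000;epoch++)
      for(int s=0;s<4;s++)
        {
         for(int i=0;i<NIN;i++) x[i]=X[s][i];
         for(int k=0;k<NOUT;k++) t[k]=T[s][k];
         Forward(x);
         Backward(x,t,0.5);
        }
   for(int s=0;s<4;s++)
     {
      for(int i=0;i<NIN;i++) x[i]=X[s][i];
      Forward(x);
      PrintFormat("%g xor %g -> %.3f",x[0],x[1],out[0]);
     }
  }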

 
Dmitry Fedoseev:

I haven't seen what he wrote there, but I can imagine, having seen the rest of his work.

Which one is yours, and where is it? The one that goes through a bunch of libraries? Sorry, but it's hard even to figure out the example there, let alone use any of it in your own work.

Oh, it's the normal one... I've already explained it twice, and I'm explaining it for the third time: it should be quick and easy to apply to any problem that arises when writing an EA... so you don't get bogged down in the Expert Advisor's algorithm at all... and it is possible. It's sad, though, that some people can't even dream of that.

***

There are many things I don't know, but back in the day I nevertheless had code for a universal multilayer perceptron trained by error backpropagation.

His jpredictor was discussed many times in the ML thread. Of course, no one is talking about the neural network in mql.

Mine doesn't have a bunch of libs, just the forest from alglib. Everything is very simple there; it's impossible to come up with anything simpler. You can feed it any indicators and train it almost instantly. So you don't have to bother with it at all.

In addition, a separate library includes an example of training on artificial data (the Weierstrass function), and it is clear that on any data with patterns (periodic cycles) this library works like a grail. There are no such cycles in the market, and finding and isolating them is beyond the scope of ML.
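For context, the Weierstrass function mentioned here is the classic series f(x) = sum over n of a^n * cos(b^n * pi * x), with 0 < a < 1. A few lines are enough to sample it as artificial data full of periodic cycles; the sketch below is illustrative and is not the example from that library.

// Weierstrass-like series as artificial "price" data: strictly periodic cycles,
// exactly the kind of pattern any decent ML model locks onto almost instantly.
double Weierstrass(double x,double a=0.5,double b=3.0,int terms=15)
  {
   double sum=0.0;
   for(int n=0;n<terms;n++)
      sum+=MathPow(a,n)*MathCos(MathPow(b,n)*M_PI*x);
   return sum;
  }

void OnStart()
  {
   double series[500];
   for(int i=0;i<500;i++)
      series[i]=Weierstrass(i/100.0);   // sampled curve to train and test on
   Print("first values: ",series[0],", ",series[1],", ",series[2]);
  }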

 
What is there to talk about? When the genetic algorithms contest was held, 90% of the forum got hysterical: how is that even possible? It turned out that no one knew what it was, let alone had their own implementation or had even tried to make one... but how much swagger everyone has!
 
Maxim Dmitrievsky:

In the ML thread his jpredictor has been discussed many times. Of course, no one is talking about the neural network that is in mql.

Mine doesn't have a bunch of libs, just the forest from alglib. Everything is very simple there; it's impossible to come up with anything simpler. You can feed it any indicators and train it almost instantly. So you don't have to bother with it at all.

In addition, a separate library includes an example of training on artificial data (the Weierstrass function), and it is clear that on any data with patterns (periodic cycles) this library works like a grail. There are no such cycles in the market, and finding and isolating them is beyond the scope of ML.

Here's the real question then: why aren't people using it? Maybe something is wrong with it? In terms of usability... the clarity of the instructions... I don't have the slightest desire to ask, because I know everything will be so complicated that I won't be able to figure out where any of it goes.

 
TheXpert:

Description of a neuron and a layer:


My interpretation differs slightly from the biological view.

The neuron itself is a simple transducer of an input x into an output y. The neuron in my scheme has no synapses; there is only an input (x), an output (y), an error (e) and a threshold (t). The error is an intrinsic property of the neuron, needed for learning. Optionally, it can also be used to visualize the error over iterations.

Identical neurons can be combined into a layer. A layer is a set (vector) of identical neurons.

The inputs and outputs of the neurons form the input and output buffers of the layer. A buffer is a separate entity used to link neurons to synapses; it is a spacer that simplifies the communication scheme.

Combining neurons into layers allows a transition to vector maths in many cases, which often simplifies and speeds up the computation.

The layer consists of at least one neuron.

...

A question for the experts. What is a "neuron" in the programmatic sense? How close is it to an ordinary function? A function likewise takes input values, transforms them and outputs values. What is the difference between a "neuron" and a function?
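To make the comparison concrete, here is the scheme above rendered as MQL5 types; this is a sketch of the description, not TheXpert's actual code. The practical difference from an ordinary function is state: a function maps x to y and forgets everything, while this neuron keeps x, y, e and t between calls, and the stored error e is what a learning step would work on.

// An ordinary function: input -> output, no memory.
double Activate(double x,double threshold) { return (x>threshold) ? 1.0 : 0.0; }

// The described neuron: the same transformation, but x, y, e, t live on as state.
struct SNeuron
  {
   double x;   // input
   double y;   // output
   double e;   // error, an intrinsic property used for learning
   double t;   // threshold
   void Process() { y=(x>t) ? 1.0 : 0.0; }
  };

// A layer: a vector of identical neurons; the neurons' inputs and outputs
// form the layer's input and output buffers, which link to the synapses.
struct SLayer
  {
   SNeuron neurons[];
   double  inBuf[];    // input buffer
   double  outBuf[];   // output buffer
   void Init(const int n)
     {
      ArrayResize(neurons,n);
      ArrayResize(inBuf,n);
      ArrayResize(outBuf,n);
      for(int i=0;i<n;i++) { neurons[i].t=0.5; neurons[i].e=0.0; }
     }
   void Process()
     {
      for(int i=0;i<ArraySize(neurons);i++)
        {
         neurons[i].x=inBuf[i];
         neurons[i].Process();
         outBuf[i]=neurons[i].y;
        }
     }
  };

void OnStart()
  {
   SLayer layer;
   layer.Init(3);
   layer.inBuf[0]=0.2; layer.inBuf[1]=0.7; layer.inBuf[2]=0.9;
   Print("function: ",Activate(0.7,0.5));   // stateless call
   layer.Process();                          // stateful layer pass
   Print("layer out: ",layer.outBuf[0],", ",layer.outBuf[1],", ",layer.outBuf[2]);
  }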