Searching for an arbitrary pattern using a neural network - page 5

 
Vladimir Simakov:
Peter. I take it that for you the term "mathematics" ends with the school course? There's a lot more to it, including algorithms.

Yes, I know maths within the school curriculum. I once asked a teacher in analytical geometry class (the one dealing with functions and coordinate axes), "If a function builds a curve on a graph, can a function be recovered from a curve on a graph?" and got the unequivocal answer, "No. It's impossible." From this I concluded that patterns can be described mathematically but cannot be identified, because you cannot recover the formula that generated them from their values.

Maybe there are other mathematical tools. Tell me if you know of any.
 
 
Peter Konow:

Yes, I know maths within the school curriculum. I once asked a teacher in analytical geometry class (the one dealing with functions and coordinate axes), "If a function builds a curve on a graph, can a function be recovered from a curve on a graph?" and got the unequivocal answer, "No. It's impossible." From this I concluded that patterns can be described mathematically but cannot be identified, because you cannot recover the formula that generated them from their values.

Maybe there are other mathematical tools. Tell me if you know of any.

You can, I'd guess: a tabular method of defining a function, plus interpolation.
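Igor's suggestion can be sketched in a few lines. This is a minimal illustration, not from the thread, and the sample points are invented: given a table of values read off a curve, Lagrange interpolation builds a polynomial that passes through every tabulated point. It won't recover the original formula, but it does give back a usable function.

```python
# Sketch: recovering a usable function from a table of points sampled off a curve.

def lagrange(points):
    """Return a function that interpolates the given (x, y) points."""
    def f(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = float(yi)
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return f

# Values read off a "curve" - here secretly y = x^2, but the code doesn't know that
table = [(0, 0), (1, 1), (2, 4), (3, 9)]
f = lagrange(table)
print(f(1.5))  # -> 2.25: matches x^2 even between the tabulated points
```

Between the tabulated points the polynomial only approximates the curve, which is the practical difference between "describing" a pattern and recovering its formula.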

 
Igor Makanu:

You can, I'd guess: a tabular method of defining a function, plus interpolation.

I could be wrong, but I think this is how neural networks work.

An array of data is sort of arranged inside a table, where each cell is a neuron that remembers one value. In the process of "learning" (reloading new data), values in cells are aggregated and reduced to a range. Eventually, each neuron remembers the range of values obtained in the data loading cycle and produces a "model" (a matrix with range values) which, as a template, is applied to the new data table and "recognition" occurs (if the data fits the ranges). Amateurishly stated, but that's the idea. I wonder what the experts have to say.

In this case, neural networks are ideal for pattern recognition.

 
Peter Konow:

I could be wrong, but I think this is how neural networks work.

An array of data is sort of laid out inside a table, where each cell is a neuron that remembers one value. In the process of "learning" (reloading new data), the values in the cells are aggregated and reduced to a range.

1. In the general case, the answer is no.

2. As a special case, yes, but it depends on the NN type.

1. An NN is characterized not by a neuron "memorizing" values but by changing weights, i.e. the connections between neurons. It's all clearly written up on Habr and easy to read: https://habr.com/ru/post/312450/

2. These are most likely Hamming networks: https://habr.com/ru/sandbox/43916/

And if you've decided to get serious about it, you'll need to read at least one book (to understand that the next book will be 80% repetition of the previous one), and at least understand the difference between classification and regression tasks for an NN - basically everything is built on that; the rest are variations on the theme, plus training methods and NN types. I haven't studied it deeply; many things get repeated but are presented as something very new under a new term... a lot of confusion, a lot of noise ))))
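Igor's first point — that a network learns by shifting its connection weights, not by neurons memorizing values — can be shown with a minimal sketch. This is not from the Habr article; it's a single neuron trained with the classic perceptron rule on an invented toy task (logical AND):

```python
# Sketch: a single neuron "learns" by adjusting its connection weights,
# not by memorizing values. Classic perceptron rule on logical AND.

weights = [0.0, 0.0]  # connection weights - the only thing training changes
bias = 0.0
lr = 0.1              # learning rate

# inputs -> target output (logical AND)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(x):
    s = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0  # step activation

for epoch in range(20):                      # a few passes over the data
    for x, target in data:
        error = target - predict(x)
        for i in range(len(weights)):
            weights[i] += lr * error * x[i]  # shift the connection weights
        bias += lr * error

print([predict(x) for x, _ in data])  # -> [0, 0, 0, 1]: AND has been learned
```

Everything the neuron "knows" lives in `weights` and `bias`; no input value is ever stored, which is the contrast with the table-of-remembered-values picture above.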

 
Igor Makanu:

1. In the general case, the answer is no.

2. As a special case, yes, but it depends on the NN type.

1. An NN is characterized not by a neuron "memorizing" values but by changing weights, i.e. the connections between neurons. It's all clearly written up on Habr and easy to read: https://habr.com/ru/post/312450/

2. These are most likely Hamming networks: https://habr.com/ru/sandbox/43916/

And if you've decided to get serious about it, you'll need to read at least one book (to understand that the next book will be 80% repetition of the previous one), and at least understand the difference between classification and regression tasks for an NN - basically everything is built on that; the rest are variations on the theme, plus training methods and NN types. I haven't studied it deeply; many things get repeated but are presented as something very new under a new term... a lot of confusion, a lot of noise ))))

Thanks, I liked the first article, but I don't understand why the network works this way and not otherwise. Everything is described simply, but it's not at all clear what it's all about. Just information without any real examples.

Weights, neurons, input and output, hidden, synapses... Values must be between 0 and 1. Why that way and not another?

How do you train the network on data whose type is not double and that lies outside the range of zero to one? How do you declare a layer? How do you set the number of neurons? Where do you load the data?

In short, I haven't figured it out yet.
 
Peter Konow:

Thanks, I liked the first article, but I don't understand why the network works this way and not otherwise. Everything is described simply, but it's not at all clear what it's all about. Just information without any real examples.

Weights, neurons, input and output, hidden, synapses... Values must be between 0 and 1. Why that way and not another?

How do you train the network on data whose type is not double and that lies outside the range of zero to one? How do you declare a layer? How do you set the number of neurons? Where do you load the data?

In short, I haven't figured it out yet.

Google "activation function" and "neural network normalization".

Example: https://www.mql5.com/ru/forum/5010#comment_329221 and the same under alglib: https://www.mql5.com/ru/forum/8265/page2#comment_333746

But you'll still need a book; trial and error won't get you there.
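The two terms Igor suggests googling can be sketched together. This is a minimal illustration with invented sample prices: min-max normalization maps raw values of any scale into [0, 1] before they reach the network, and a sigmoid activation squashes any weighted sum into (0, 1) — which is why the article's values sit in that range. Data that isn't of type double (e.g. categories) would first need to be encoded as numbers.

```python
# Sketch: why network inputs end up in [0, 1]. Min-max normalization rescales
# raw data; a sigmoid activation squashes any sum into (0, 1).

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def minmax_normalize(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

prices = [1.1025, 1.1050, 1.1010, 1.1075]  # invented raw quotes, not in [0, 1]
normalized = minmax_normalize(prices)

print(normalized)  # every value now lies in [0, 1]
print(sigmoid(0))  # -> 0.5, the midpoint of the activation's output range
```

To feed new data back through a trained network, the same `lo` and `hi` from the training set would have to be reused, otherwise the scale shifts.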

 
Igor Makanu:

Google "activation function" and "neural network normalization".

Example: https://www.mql5.com/ru/forum/5010#comment_329221 and the same under alglib: https://www.mql5.com/ru/forum/8265/page2#comment_333746

But you'll still need a book; trial and error won't get you there.

Ok. I want to figure it out for myself, and then read the book. )

The article says there are three uses for networks: Classification, Prediction and Recognition. Then it turns out that price pattern recognition should be based not on OHLC data but on chart screenshots. Recognition works with images.

 
Peter Konow:

Then it turns out that price pattern recognition should be based not on OHLC data but on chart screenshots. Recognition works with images.

hilarious! )))

what is a screenshot?

and what is OHLC?

in machine representation!
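Igor's rhetorical point, as I read it: in machine representation both are just arrays of numbers. A toy sketch with invented values:

```python
# Sketch: to the machine, an OHLC bar and a screenshot are the same kind of
# thing - numbers. Only the amount and meaning of the numbers differ.

ohlc_bar = (1.1010, 1.1075, 1.1005, 1.1050)  # open, high, low, close: 4 floats

# A 4x4 grayscale "screenshot": each pixel is just an intensity 0-255
screenshot = [
    [0, 0, 255, 0],
    [0, 255, 0, 0],
    [255, 0, 0, 0],
    [0, 0, 0, 255],
]

# Either way, a network's input layer sees a flat vector of numbers
flat_pixels = [p for row in screenshot for p in row]
print(len(ohlc_bar), len(flat_pixels))  # 4 numbers vs 16 numbers
```

So "recognising from a screenshot" is not a different kind of input to the network, just a much larger numeric vector than an OHLC bar.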

 
Igor Makanu:

hilarious! )))

what is a screenshot?

and what is OHLC?

in machine representation!

Well, the article separates the three applications of the networks. Recognising from price data is one thing; recognising from colour data is another. They are completely different approaches and mechanisms.

PPS. Price patterns are graphical in nature, not mathematical. If one tries to recognise them mathematically, one is stumped, but graphically it is easy.