How do you work with neural networks?

 
A huge, heartfelt thank you!
 
alexjou:

- The number of layers is determined by the dimensionality of the input vector, i.e. the network is generated automatically by defining and initializing/reading work arrays;

- The number of neurons in a hidden layer decreases progressively with the layer number N, according to the law 1/(2^N) ("decision crystals") or 1/N ("memorizing crystals");
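A minimal sketch (an assumed Python illustration, not the poster's actual code) of how hidden-layer sizes could be derived from the input dimensionality under the 1/(2^N) and 1/N shrinking rules described above; the function name, the integer division and the stop-at-one-neuron rule are my assumptions:

```python
def layer_sizes(input_dim, rule="1/2^N", min_neurons=1):
    """Return a list of hidden-layer sizes for the given shrinking rule."""
    sizes = []
    n = 1
    while True:
        size = input_dim // (2 ** n) if rule == "1/2^N" else input_dim // n
        if size < min_neurons:
            break
        sizes.append(size)
        n += 1
    return sizes

print(layer_sizes(64, rule="1/2^N"))  # [32, 16, 8, 4, 2, 1]
print(layer_sizes(8,  rule="1/N"))    # [8, 4, 2, 2, 1, 1, 1, 1]
```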


It seems to have been proven that three layers are sufficient to approximate any function. You seem to assume there should be more. Based on what considerations?


In general the concept is very close to my own thinking, though in terms of familiarity with NS I am probably at a considerably earlier stage. It would be interesting to keep in touch. Can you give me your e-mail address here? Or write to me at likh on yandex ru.

 
The problem here is slightly different.

It is not about function approximation. I know nothing about the proof you mention (I have never needed to work with projective networks), but general considerations suggest that, when approximating arbitrary functions, the type of basis functions and the dimension of the basis play a much bigger role than the "layering" of the network; this, however, is true of projective methods in general.

I chose this network configuration because it is probably close to how the brains of living creatures are built and how they "learn" by adaptively memorizing and classifying the input information. (I pestered my biologist and physician friends half to death with my silly questions, but they could not say anything more definite than "why do you need this" and "cut one open and see for yourself".) Hence the choice of Oja's rule for adjusting the weights: in this case one cannot really say whether "supervised" or "unsupervised" learning is taking place (I have always found that division of concepts somewhat artificial anyway). Interestingly, from a certain point in the weight adjustment such a network stops being predictable, in other words it starts to "behave", although so far we are talking about just one such "crystal".
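A minimal sketch of the Oja's-rule weight update mentioned above, reduced to a single linear neuron (the post describes whole multi-layer "crystals"); the toy data, learning rate and number of passes are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D inputs whose variance is concentrated along the (1, 1) diagonal.
z = rng.normal(size=1000)
X = np.column_stack([z + 0.2 * rng.normal(size=1000),
                     z + 0.2 * rng.normal(size=1000)])

w = rng.normal(size=2)              # initial weights (assumed random init)
eta = 0.01                          # learning rate (assumed)

for _ in range(5):                  # a few passes over the toy data
    for x in X:
        y = w @ x                   # neuron output
        w += eta * y * (x - y * w)  # Oja's rule: Hebbian term with a decay keeping ||w|| near 1

print("learned direction:", w / np.linalg.norm(w))  # close to +/-(0.71, 0.71), the first principal component
```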

In short, the network, together with the method of adjusting its weights, was built almost entirely on heuristic considerations. After all, on the exchange, which we are all trying to beat, such considerations most likely play far from the last role. E-mail: alex-jou (at) rambler (dot) ru (just one big request: please don't add it to your contact list, to avoid spam; in fact I ask everyone to do the same, since the usefulness of that feature is close to zero and the harm is enormous).
 
Candid:

It seems to have been proven that three layers are sufficient to approximate any function. You seem to assume there should be more. Based on what considerations?

Admittedly, the sigmoid does not actually figure in that proof, so approximating any continuous function with a three-layer perceptron is only a theoretical possibility. In applied work, unfortunately, the results are much worse.
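For reference, the result being discussed is the universal approximation theorem: a network with an input layer, one hidden layer of sigmoid-type units and an output layer can approximate any continuous function on a compact set to arbitrary accuracy, but the proof says nothing about how to find the weights. Below is a toy hand-rolled sketch (my own assumed illustration, not code from the thread) of fitting sin(x) with a single tanh hidden layer by plain batch gradient descent; how well training actually finds the approximator is exactly the practical question raised above.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
t = np.sin(x)

H = 10                                    # hidden units (assumed)
W1, b1 = rng.normal(size=(1, H)), np.zeros(H)
W2, b2 = rng.normal(scale=0.1, size=(H, 1)), np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)              # hidden layer
    y = h @ W2 + b2                       # linear output layer
    err = y - t
    dW2, db2 = h.T @ err / len(x), err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    dW1, db1 = x.T @ dh / len(x), dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float((err ** 2).mean()))
```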
 
Does anyone know anything about the package PolyAnalyst?
 
sayfuji:
My question is this. How do you work with neural networks: do you implement them using only MQL4 means (for example, Artificial Intelligence), use programs like MatLab, or use specialised neural network packages (Neuro Shell Day Trader, NeuroSolutions, etc.), attaching a DLL to the EA's code? What is your approach and what are its advantages over the others (apart from profitability)?

There are advantages and disadvantages to each approach:

1. When developing your own neural network, you are not dealing with a black box, provided, of course, that you did not simply take someone else's source code and compile it, but added something specific to your task.

2. When you buy a proprietary network, you get a black box for your money, but along with it: support, ready-made solutions and the vendor's own input whitening.


Simply put, if, for example, a universal time-series forecasting package is purchased, the user does not have to think about what is fed into the inputs, because for the package the input is the time series as it is. The package itself prepares it before feeding it to the network's inputs, i.e. before the network is sent for training, whitening takes place (a rough sketch follows the list below):

1. Filtering and smoothing the input data to make it more predictable and less noisy

2. Normalising

3. Filtering and elimination of less significant inputs

4. Eliminating correlations between inputs

5. Removing the linear component (trend) from the input data

6. Selection of adequate sigmoids for normalised data

And so on and so forth.
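A rough sketch, purely as an assumed illustration rather than any real package's internals, of what steps 1 to 5 of such whitening could look like on a lagged time-series matrix (step 6, the choice of sigmoids, is left out):

```python
import numpy as np

def smooth(series, window=5):
    """1. Moving-average filter: less noise in the inputs."""
    return np.convolve(series, np.ones(window) / window, mode="valid")

def detrend(series):
    """5. Remove the linear trend; keep its coefficients for later recovery."""
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)
    return series - (slope * t + intercept), (slope, intercept)

def make_lagged_inputs(series, lags=10):
    """Build an (n_samples, lags) input matrix from a single time series."""
    return np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])

def whiten(X):
    """2. Normalise, 4. decorrelate via PCA, 3. drop the least significant directions."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    keep = eigval > 1e-2 * eigval.max()
    return (X @ eigvec[:, keep]) / np.sqrt(eigval[keep])

# Toy usage on a random-walk "price" series
prices = 100.0 + np.cumsum(np.random.default_rng(2).normal(size=500))
s, trend = detrend(smooth(prices))
X = whiten(make_lagged_inputs(s))
print(X.shape)
print("decorrelated:", np.allclose(np.cov(X, rowvar=False), np.eye(X.shape[1]), atol=0.05))
```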


After that, the network is trained, and then the output data is recovered: for example, if the linear component was removed at the inputs, the package will restore it at the outputs. Shit on the inputs turns into candy on the outputs. And since we are dealing with a black box, it is quite possible that instead of a neural network the package actually uses a genetic algorithm, or maybe some regression or some other extrapolation method.
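Continuing the assumed sketch above, the recovery step could look like this: the linear trend removed from the inputs is added back to whatever the black box predicts (the names refer to the previous sketch, not to any real package):

```python
def restore(pred_detrended, trend, t_index):
    """Map a prediction in detrended units back to the original scale."""
    slope, intercept = trend
    return pred_detrended + slope * t_index + intercept

# e.g. a forecast y_hat for the next step t_next = len(s) would be recovered as:
# price_forecast = restore(y_hat, trend, t_next)
```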

-+++------------------------------------------------------------------------------------------------------------------------------------------+++-

- So for those users who are going to dabble in neural networks but do not understand the term "normalization", it is better to get a universal time-series forecasting package, where the most you can be asked for besides the time series (quotes) is the lag size.

- More advanced users, who know the peculiarities of neural network architectures, i.e. have studied their advantages and disadvantages, should buy more specialised packages. Such packages are not suitable for dabbling, because you have to find the right architecture for the task yourself, and the "scientific" method of shoving whatever you like into the inputs is no good here, because the outputs will most likely be full of crap too.

- Everything else, i.e. assembling networks from third-party source code or writing them from scratch, is suitable only for those who have real experience in preparing the input data before training the network and in recovering the data at the outputs after training.

I.e. the principle for choosing a neural network package is obscenely simple: if you can't shit, don't torture your arse. If you have bought a cool package and have questions about using it that you cannot answer yourself with the help of the included manuals, it means only one thing: don't go down that road, i.e. buy something less cool, made for the less sophisticated.

 

The problem with neural networks is the same as with other TCs (trading systems) that do not use neural networks: a neural network will always find a pattern on any given time interval (training or optimization), and then the same question arises: will this pattern keep working (bringing profit) in the future?

 

Reshetov:

Shit on the inputs turns into candy on the outputs.

Never.

It is quite possible, since we are dealing with a black box, that instead of a neural network in a proprietary package a genetic algorithm will be used, or maybe some regression or some other extrapolation method.

How does GA relate to NS and regression?

NS is a method.

GA is a method.

"Use GA instead of NS" sounds crazy. It's like "replace the heart with an exhaust gas analyzer."

I'm sorry. (chuckles)

 
LeoV:

....... and then the same question arises: will this pattern keep working (bringing profit) in the future?

Suppose, purely hypothetically, that a way will be found, or has already been found, to answer this question, and the answer is "No". Moreover, for any TC. What conclusion can be drawn from this?

Traders will stop trading? Just curious though.

PS. Would traders buy reliable information confirming that the answer is "No"? Or would they prefer not to know the answer to that question? (rhetorical, if anything)

 
joo:

Suppose, purely hypothetically, that a way will be found, or has already been found, to answer this question, and the answer is "No". Moreover, for any TC. What conclusion can be drawn from this?

Traders will stop trading? Just curious though.

PS. Would traders buy reliable information confirming that the answer is "No"? Or would they prefer not to know the answer to that question? (rhetorical, if anything)

Pure scholasticism, if anything.