How do you work with neural networks?

 
I understand there is no source code in the public domain. Is it available only on a case-by-case basis? If you don't mind, please send it to the email address above.
 

"Blessed is he who believes, warmth to him in the world"...

 

"Блажен, кто верует, тепло тому на свете"...

Man, three pages into the thread and I haven't gotten a single answer to my questions. How hard can it be to help? I keep asking for the real thing. Thank you, community.

 
sayfuji:

"Blessed is he who believes, warmth to him in the world"...

Man, three pages into the thread and I still haven't gotten a single answer to my questions. How hard can it be to help? I'm asking and asking for the real thing. Thank you, community.

You can't deny it: you know how to keep a topic going without saying a word on the merits.

 

Sayfuji, you should at least do some research. There are many threads about neural networks here.

You may also want to check out this site; its author is also a well-known and respected visitor to this forum:

http://fxreal.ru/forums/index.php

 

I approached the question responsibly, but the respected LeoV, while keeping the conversation going, did not answer the essence of the question. He spent some time on the alp...ri forum, so I do not doubt his knowledge and skills; that is why I waited for his answer, but it never came.

PS Prival, it really is a good site; I've been using it for a few months now. Well done, klot.

 
I may have been misunderstood. On the merits (hereinafter, my purely private opinion):

1) It is the idea that matters most, not the means of its software implementation. The idea is well described in the above excerpt from St. Lem;
2) Most likely, nobody will give away the source code of a genuinely working network for free;
3) As for ready-made neural network programs: it is impossible to create a "universal theory of everything", so if the goal is to develop something workable, no one is exempt from implementing their own ideas independently. For this reason even Matlab, for example, with its mighty toolset, did not satisfy me; NSDT, of course, is nowhere near Matlab's level.

Example.

I should say right away that I consider all kinds of forecasts of the price itself, especially down to the third or fourth decimal place, to be an inherently meaningless exercise. Such constructions, in my opinion, are nothing more than self-deception. Instead, as someone suggested in one of the local threads, you can try to detect early a price movement that will cover at least a predetermined number of points. That number can be determined from an analysis of previous price behavior (I believe Composter solved this problem when he defined trend/flat).

A working hypothesis: some strong price movements have reproducible "precursors". We can try to teach the network to recognise these precursors while working "from the market".
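
To make the idea concrete, here is a minimal labeling sketch in Python (my own illustration, not anyone's working code): it marks the bars after which the price covers at least the given number of points within some horizon. All names, defaults and the numpy implementation are assumptions.

import numpy as np

def label_precursors(closes, threshold_points=30, point=0.0001, horizon=20):
    # Mark bar i with +1/-1 if the price moves at least threshold_points
    # up/down within the next `horizon` bars, else 0.
    closes = np.asarray(closes, dtype=float)
    labels = np.zeros(len(closes), dtype=int)
    threshold = threshold_points * point
    for i in range(len(closes) - 1):
        window = closes[i + 1 : i + 1 + horizon]
        up = window.max() - closes[i]
        down = closes[i] - window.min()
        if up >= threshold and up >= down:
            labels[i] = 1
        elif down >= threshold:
            labels[i] = -1
    return labels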

The design of a network (a "crystal" in St. Lem's terminology: the basic structural unit of a large network, i.e. a "cloud") in general terms (a rough code sketch follows the list):


- a multilayer auto-adaptive compressing Oya network with a single output, allowing the type and parameters of the transfer function to be selected for the input, intermediate and output layers; such a network can simultaneously act as adaptive memory and as a classifier of input vectors;

- the number of layers is determined by the dimensionality of the input vector, i.e. the network is generated automatically by defining and initializing/reading work arrays;

- the number of neurons in the hidden layer decreases progressively with increasing layer number N according to the law 1/(2^N) ("solving crystals") or 1/N ("remembering crystals");

- the nonlinearity parameter in the hidden layer may depend on the layer number;

- there is a switchable internal feedback mode and a switchable external input to communicate with other 'crystals' to form a 'cloud'.
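
For illustration only, a rough Python skeleton of such a "crystal" under my own assumptions (fully connected layers, tanh as the transfer function, random initialization); the real network described above selects transfer functions per layer and supports feedback, which this sketch omits:

import numpy as np

def layer_sizes(input_dim, mode="solving"):
    # Shrink layer widths with layer number N as 1/(2^N) ("solving")
    # or roughly 1/N ("remembering") until a single output neuron remains.
    sizes, n, width = [input_dim], 1, input_dim
    while width > 1:
        width = max(1, input_dim // (2 ** n if mode == "solving" else n + 1))
        sizes.append(width)
        n += 1
    return sizes

def init_crystal(input_dim, mode="solving", rng=np.random.default_rng(0)):
    # The network is generated automatically from the input dimensionality.
    sizes = layer_sizes(input_dim, mode)
    return [rng.normal(0, 0.1, (m, k)) for k, m in zip(sizes, sizes[1:])]

def forward(weights, x, beta=1.0):
    # Forward pass; the nonlinearity parameter beta/layer_no depends
    # on the layer number, as the description above allows.
    activations = [np.asarray(x, dtype=float)]
    for layer_no, W in enumerate(weights, start=1):
        activations.append(np.tanh(beta / layer_no * (W @ activations[-1])))
    return activations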

One of the most important and subtle points is the formation of the input vector. So far, just for testing and control of the network functioning, it is formed in the conventional way: y[] = (x[] - mean(x[])) / sigma(x[]). (This part of the problem is not yet completely solved.)
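
In numpy form, that conventional normalization is simply (a trivial sketch; the guard against zero sigma is my addition):

import numpy as np

def normalize(x):
    x = np.asarray(x, dtype=float)
    sigma = x.std()
    # y[] = (x[] - mean(x[])) / sigma(x[]); avoid division by zero
    return (x - x.mean()) / (sigma if sigma > 0 else 1.0)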

The "learning" of the network is done post factum by the heuristic rule: after the price has passed the specified number of points, the order to adjust weights by the shifted back i.e. "previous" input vector is given; thus the network "remembers" it, taking into account the previous accumulated information. It is assumed that the network trained in this way will recognise precursors and thus be able to give trading signals in real time. In the meantime, "the individual crystal was not so much flying as bouncing..." (see ibid.).

Interpretation of the output and automatic formation of the "cloud" itself, i.e. the neuro-committee, have not yet been implemented. There are no particularly beautiful pictures yet either.

Personally, I find this approach promising. Once again: all of the above is my purely private opinion.
 
sayfuji:

I approached the question responsibly, but the respected LeoV, while keeping the conversation going, did not answer the essence of the question. He spent some time on the alp...ri forum, so I do not doubt his knowledge and skills; that is why I waited for his answer, but it never came.

PS Prival, it really is a good site; I've been using it for a few months now. Well done, klot.

And what exactly are you dissatisfied with? LeoV did in fact answer your original question on the subject, although you are trying to argue the opposite. As for the fact that he did not share source code or work out further details for you, that is simply not part of his duties.


Don't ask cheeky questions, as some forum users do, like "Show me the source code of a super-profitable neural network", and you will get quite adequate answers.

 
Yuri, unfortunately (or luckily?), I'm not very good at being cheeky. But never mind. Thanks, alexjou, for the detailed answer. I have no illusions, but I am interested in the Oya network. Could I ask where one can read about it?
 
"Oya net" is just a free speech shorthand for "Oya-adjusted weights net". The Oya rule itself is a modification of the Hebb rule, excluding infinite growth of weights by their autonormalization in the process of adjustment; in this case the ends of weight vectors are located approximately within a unit hypersphere. See, for example, here: A.A. Ezhov, S.V. Shumsky. "Neurocomputing and its Applications in Economics and Business". Moscow, 1998 (you can find the lectures in pdf format on the Internet). Also a very good book, though somewhat difficult for a beginner: Stanislav Osovsky. "Neural Networks for Information Processing. Finance and statistics, 2002 (available on the Internet in djvu format). There is a lot of other literature on networks on the Internet.