Is there a process whose analysis of one part does not allow predicting the next part?

 
tara:

If you're firmly set on nets, look for a fast algorithm that precedes the slow one. Imho, of course :)

Uh... ah... ooo..... explain a little, please. It sounds more than fundamental, but not very clear. I love fast algorithms and am willing to look for them, but I don't understand where. Don't take that as a tease - I'm serious.
 
herhuman:
What do you think - is it the merit of the net or of the learning algorithm?
Without each other they are meaningless and useless.
 
tara:


Hello!

Life cannot be predicted in such a way as to make money on that prediction.

But: "You can't sell inspiration, but you can sell a manuscript" :)

If you're firmly set on nets, look for a fast algorithm that precedes the slow one. Imho, of course :)

I don't get it, sorry.
 
MetaDriver:
Uh... ah... ooo..... explain a little, please. It sounds more than fundamental, but not very clear. I love fast algorithms and am willing to look for them, but I don't understand where. Don't take that as a tease - I'm serious.


The principle of maximum freedom of choice for a fast algorithm: take each step in a way that leaves maximum freedom to choose the direction of the next step. A toy sketch of this idea follows at the end of this post.

It was first applied in electrical engineering (electrical machine theory). Here is the author: http://www.uni-dubna.ru/departments/sustainable_development/Portal/Nauch_trudy_kafedry/Osnov_trudy/Kron/

Not simple, of course... but not overly complicated either :)
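To make the principle concrete, here is a toy illustration of my own (it is not Kron's actual theory, just the stated heuristic): a walker on a grid that, at every step, takes the move which leaves the largest number of onward moves open.

```python
# Toy sketch of "maximum freedom of choice": at each step, move to the
# unvisited neighbour that keeps the most onward moves available.

def neighbours(cell, visited, n=8):
    x, y = cell
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in visited:
            yield (nx, ny)

def max_freedom_walk(start=(0, 0), n=8):
    visited, path = {start}, [start]
    while True:
        options = list(neighbours(path[-1], visited, n))
        if not options:
            return path
        # the "fast" greedy choice: preserve maximum freedom for the next step
        step = max(options, key=lambda c: sum(1 for _ in neighbours(c, visited, n)))
        visited.add(step)
        path.append(step)

print(len(max_freedom_walk()), "cells covered on an 8x8 grid")
```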

 
joo:


2. An RNG (random number generator), and indeed any random number series, can be predicted: knowing the previous segment, you can forecast the value at the end of the next segment with a certain degree of accuracy (the numbers cannot hover around one value for too long - otherwise it would no longer be a random number series).
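One quick way to put this claim to a test (a minimal sketch; reading "predict from the previous segment" as guessing the next step from the k preceding ones is my assumption, as is the choice of Python's built-in generator):

```python
import random
from collections import Counter, defaultdict

random.seed(1)  # CPython's random module is a Mersenne Twister (MT19937)
steps = [random.choice((-1, 0, 1)) for _ in range(100_000)]

# Remember which value most often followed each k-step pattern so far,
# guess that value for the next step, and count the hits.
k = 3
followers = defaultdict(Counter)
hits = total = 0
for i in range(k, len(steps)):
    pattern = tuple(steps[i - k:i])
    if followers[pattern]:
        guess = followers[pattern].most_common(1)[0][0]
        hits += guess == steps[i]
        total += 1
    followers[pattern][steps[i]] += 1

print(f"hit rate: {hits / total:.4f}  (pure chance for 3 classes: 0.3333)")
```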

Mischek2:

Joke? Provocation? ))

I did it this way.

I took 10,000 randomly generated numbers from {-1, 0, 1} - as if the price went down / stayed in place / went up.

For each of them I take 3 more randomly generated numbers in the range [-1.0; 1.0] and feed them to the neural network; the network should produce one of three output signals: -1, 0 or 1. If the answer is correct, +1 goes into the sum of answers; if incorrect, -1. That sum of answers is the fitness function (FF) for training, which, of course, I maximize.

That is for training. In virtual trading, if the network gives -1 or 1, I play it as a sell or a buy respectively; if it gives 0, I gain 0.

So I've added a kind of "Zero" - only this zero is on my side. Bottom line: a positive MO (mathematical expectation) on random-series prediction!

The bold blue line - as you can see, it's creeping up! - is the growth of the virtual balance: -1 for a loss, +1 for a profit, 0 for a zero.

Anyone can repeat this hocus pocus. And, accordingly, apply it in real trading. :)
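For anyone who actually wants to repeat it, here is a minimal Python sketch of the setup as described above. The 0.33 decision band, the (1+1) hill-climb standing in for the poster's own genetic algorithm, and the reading of the payoff rules are all my assumptions; the original was presumably written in MQL5.

```python
import math
import random

random.seed(0)
N = 10_000

# 10,000 random "price moves" and, for each, 3 random inputs in [-1, 1].
targets = [random.choice((-1, 0, 1)) for _ in range(N)]
inputs = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(N)]

N_W = 7 * 4 + 7 + 1  # 3-7-1 net: 7 hidden neurons * (3 weights + bias), then 7 output weights + bias

def forward(w, x):
    """3-7-1 net with tanh (a sigmoid symmetric about 0) in every neuron."""
    hidden, i = [], 0
    for _ in range(7):
        s = w[i] * x[0] + w[i + 1] * x[1] + w[i + 2] * x[2] + w[i + 3]
        hidden.append(math.tanh(s))
        i += 4
    return math.tanh(sum(w[i + j] * h for j, h in enumerate(hidden)) + w[i + 7])

def decide(y, band=0.33):
    """Map the net output to -1 / 0 / +1 (the dead band is an assumption)."""
    return 0 if abs(y) < band else (1 if y > 0 else -1)

def fitness(w):
    """+1 per correct answer, -1 per wrong one: the FF being maximized."""
    return sum(1 if decide(forward(w, x)) == t else -1
               for x, t in zip(inputs, targets))

# Crude stand-in for "my own genetics": a (1+1) mutation hill-climb.
w = [random.uniform(-1.0, 1.0) for _ in range(N_W)]
best = fitness(w)
for _ in range(100):
    cand = [wi + random.gauss(0.0, 0.1) for wi in w]
    f = fitness(cand)
    if f > best:
        w, best = cand, f

# Virtual trading, one reading of the rules: a 0 answer pays 0,
# a -1/+1 answer pays +1 when it matches the move and -1 otherwise.
balance = 0
for x, t in zip(inputs, targets):
    d = decide(forward(w, x))
    balance += 0 if d == 0 else (1 if d == t else -1)
print("best FF:", best, " final virtual balance:", balance)
```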

 
joo:

I did it this way.

I took 10,000 randomly generated...

Maybe "pseudo-randomly generated" would be more correct.

Is that how it works on real quotes too?

Looks like the net memorized the random generator's table.

 
herhuman:

1. Maybe "pseudo-randomly generated" would be more correct.

2. Is that how it works on real quotes too?

3. Looks like the net memorized the random generator's table.

1. Yes. Correct.

2. It gets even better. Try it, comrades, try it. The only thing weighing down the MO is the spread, so you can't pull this off on minute charts. From M10 upwards it works fine.

3. Seven hidden neurons memorized/"learned" the random table? That would mean the net has internalized the generator's algorithm - which is impossible. The generator's period lies well beyond 10,000.

 
joo:

1. Yes. Correct.

2. It gets even better. Try it, comrades, try it. The only thing weighing down the MO is the spread, so you can't pull this off on minute charts. From M10 upwards it works fine.

3. Seven hidden neurons memorized/"learned" the random table? That would mean the net has internalized the generator's algorithm - which is impossible. The generator's period lies well beyond 10,000.


This is an interesting result. Could you elaborate on the net?

1. How many layers are there? I understand there is only one hidden layer with 7 neurons. Is that correct?

2. What is fed to the outputs of these neurons - the -1, 0, +1 price changes?

3. How many inputs?

4. What is the transfer function of the hidden neuron?

5. How is the network trained? By genetics?

6. How many bars in the training sample?

7. What is the result shown - a forward test?

8. How were the random numbers generated? Mersenne Twister (mt19937)? If not, try that generator - and definitely run a forward test. It will be very interesting to compare results.
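On question 8: in Python the built-in random module is already MT19937, so setting up such a series for comparison is a one-liner (an illustrative sketch, not the poster's code):

```python
import random

rng = random.Random(42)  # CPython's random.Random is the Mersenne Twister (MT19937)
series = [rng.choice((-1, 0, 1)) for _ in range(10_000)]
print(series[:10])

# MT19937's period is 2**19937 - 1, so 10,000 draws cannot wrap its cycle.
```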

 
gpwr:


This is an interesting result. Could you elaborate on the net?

1. How many layers are there? I understand there is only one hidden layer with 7 neurons. Is that correct?

2. What is fed to the outputs of these neurons - the -1, 0, +1 price changes?

3. How many inputs?

4. What is the transfer function of the hidden neuron?

5. How is the network trained? By genetics?

6. How many bars in the training sample?

7. What is the result shown - a forward test?

8. How were the random numbers generated? Mersenne Twister (mt19937)? If not, try that generator - and definitely run a forward test. It will be very interesting to compare results.

1. 3 layers, 3-7-1: 3 input neurons, 7 in the hidden layer, 1 in the output layer.

2. The input is 3 pseudo-random numbers in the range -1.0 ... 1.0.

3. 3.

4. A sigmoid symmetric with respect to 0 (see the sketch after these answers).

5. My own genetic algorithm.

6. 10000.

7. The training sample. But, as many people here claim, randomness cannot be predicted, cannot be learned from, and nothing can be done with it at all. If that is so, it makes no difference whether it is OOS or the sample. But for fun we can do an OOS run.

8. The random numbers were generated by the MT5 built-in generator (they say it is the native C++ one).
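On answer 4: a "sigmoid symmetric with respect to 0" is, for instance, tanh, which is just the ordinary logistic function rescaled to (-1, 1); the exact function used is not specified, so take this as a minimal illustration:

```python
import math

def logistic(x):
    """Ordinary sigmoid, range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def symmetric_sigmoid(x):
    """Rescaled to (-1, 1); identical to tanh(x)."""
    return 2.0 * logistic(2.0 * x) - 1.0

assert abs(symmetric_sigmoid(0.7) - math.tanh(0.7)) < 1e-12
```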

Anyway, my point is that real prices contain more information than random series do. Which means only one thing.


PPS Here's the OOS: 10,000 "bars" into the future.


PPPS A remark: I insist that my methods be recognized as anti-scientific, and I also warn of the high risks of trading in the financial markets.

PPPPS The previous PS is not addressed to Vladimir.

 
joo:

7. The training sample. But, as many people here claim, randomness cannot be predicted, cannot be learned from, and nothing can be done with it at all. If that is so, it makes no difference whether it is OOS or the sample. But for fun we can do an OOS run.

Who says it is anything that deep? Of course a neural net can memorize random data. But is that prediction?