What to feed to the input of the neural network? Your ideas... - page 18

 
mytarmailS #:
What to feed to the input of the neural network? Your ideas...

What prevents you from making a large automatic search of many variants?

I think this is the best option: practice is the criterion of truth, and if you make the experimentation automatic, it's a dream come true.

It all comes down to energy, time, skill, knowledge, etc.

I have free time, so as a hobby I tinker with simple, understandable ideas.
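The "large automatic search of many variants" idea can be sketched in a few lines of Python. Everything below is a placeholder assumption: the feature names, the parameter grids, and the scoring function (a real version would train a model or run a backtest instead of returning a random number).

```python
import itertools
import random

random.seed(0)

# Hypothetical search space: which inputs to feed the network, plus a couple
# of hyperparameters. All names here are illustrative, not real features.
feature_sets = [("returns",), ("returns", "volume"), ("returns", "rsi", "volume")]
lookbacks = [5, 20, 50]
learning_rates = [0.01, 0.001]

def evaluate(features, lookback, lr):
    """Stand-in for a real training/backtest run.

    In practice this would train a model on the chosen inputs and return
    an out-of-sample metric; here it just returns a dummy score.
    """
    return random.random()

# Exhaustively try every combination and keep the best one.
results = []
for features, lookback, lr in itertools.product(feature_sets, lookbacks, learning_rates):
    score = evaluate(features, lookback, lr)
    results.append((score, features, lookback, lr))

best = max(results)
print(f"tried {len(results)} variants, best score {best[0]:.3f} with {best[1:]}")
```

The point of the sketch is only the loop structure: once `evaluate` is a real backtest, the same `itertools.product` pattern tests every variant unattended.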

 
Ivan Butko #:

It all comes down to energy, time, skill, knowledge, etc.

I have free time, so as a hobby I tinker with simple, understandable ideas.

As far as I remember, you write in C++.

Can you train models in C++, rather than in that program?

If so, you only need one library, and it will solve all your problems.

 
mytarmailS #:

As far as I remember, you write in C++.

Can you train models in C++, rather than in that program?

If so, you only need one library, and it will solve all your problems.

Nah, you've got me confused with someone else. My level is low.

I try to implement curious ideas in procedural MQL, and in Python with the help of the chat and a great deal of swearing, while you explain to it what you want to try and where the endless errors come from.

Well, to be fair, the chat did help me get TensorFlow running, hook my graphics card up to it, and try CNN, LSTM and so on. And Q-learning. Nothing really works; I shoved everything I could into the input.

Of the curious ones, DQN remains: judging by the description, this technology is exactly what you need to emulate the work of a trader. But the chat produces so much code, much of it non-working, that I can't even understand and fix it.

Computer vision is also an interesting idea for emulating a trader's work, but one forum member tried it and couldn't get anywhere with it. I will probably never touch that thing again.

 
Ivan Butko #:

Nah, you've got me confused with someone else. My level is low.

I try to implement curious ideas in procedural MQL, and in Python with the help of the chat and a great deal of swearing, while you explain to it what you want to try and where the endless errors come from.

Well, to be fair, the chat did help me get TensorFlow running, hook my graphics card up to it, and try CNN, LSTM and so on. And Q-learning. Nothing really works; I shoved everything I could into the input.

Of the curious ones, DQN remains: judging by the description, this technology is exactly what you need to emulate the work of a trader. But the chat produces so much code, much of it non-working, that I can't even understand and fix it.

Computer vision is also an interesting idea for emulating a trader's work, but one forum member tried it and couldn't get anywhere with it. I will probably never touch that thing again.

They're all just tools. Hammers won't hammer nails by themselves, no matter how you teach them. Anything can be fed into the input, anything relevant to this time series. It's not that fundamental.

You don't demand ready-made trading systems from the MT5 optimiser. It's the same with neural networks. There is no sense in overfitting. What does make sense is studying machine learning theory and mathematical statistics.

And this is quite extensive and complex material.
 
Ivan Butko #:

Nah, you've got me confused with someone else. My level is low.


I would recommend you forget about MQL while you are in the search phase...

You need a language for rapid prototyping with the right tools.

Then, after a couple of days of learning and a couple of days of coding, you will be able to run an algorithm that generates and searches through ideas, and you will test more ideas in one day than you could in a lifetime of coding in MQL.

 

I forgot to say that the probability of closing a trade in profit is not the main thing. The main thing is what the SL/TP should be, and that is the main question for the neural network. In short, there are more questions than answers. But one thing is clear: SL = TP.

 
But if there is a spread, the picture changes: the stop ends up closer than the take... yeah, that's something to consider...
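The effect of the spread on an SL = TP setup can be checked with simple arithmetic. The numbers below are purely illustrative:

```python
# With SL = TP a trade looks symmetric, but the spread is paid on every
# entry, so winners net less and losers cost more than the nominal levels.
tp, sl, spread = 20.0, 20.0, 2.0   # distances in points; example values only

# Net outcome per trade after the spread:
win_points = tp - spread    # a winner nets less than the nominal take
loss_points = sl + spread   # a loser costs more than the nominal stop

# Break-even win rate p solves: p * win_points = (1 - p) * loss_points
p_breakeven = loss_points / (win_points + loss_points)
print(f"break-even win rate with SL = TP: {p_breakeven:.2%}")  # 55.00%
```

So with the spread included, even a symmetric SL = TP trade needs noticeably more than 50% winners just to break even.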
 
Sergey Pavlov #:

I forgot to say that the probability of closing a trade in profit is not the main thing. The main thing is what the SL/TP should be, and that is the main question for the neural network. In short, there are more questions than answers. But one thing is clear: SL = TP.

What if TP=10, SL=20, and the probability of closing a trade in profit is 99%?
 
Maxim Dmitrievsky #:
They're all just tools. Hammers won't hammer nails by themselves, no matter how you teach them. Anything can be fed into the input, anything relevant to this time series. It's not that fundamental.

What if you give a hammer to a robot?

There's a Japanese robot, let's say. It already plays basketball in Tokyo, communicates with little Japanese kids, has learnt to walk, answer, be smart, etc. He sees, hears, reacts.

And then they give him a hammer. He takes it and drives nails with it.

....

And then they sit him in a soft computer chair at a desk with three monitors and say: "There, see? All sorts of charts. You've got to make money on them. Figure it out."

And so, with the help of the specialists who taught him everything, he begins to learn how to trade: poking buttons and so on. And since he is also a language model, he also learns from publicly available articles on the subject, information about technical analysis, price action, etc.

In the end, everything comes down to reinforcement learning. Working in forex turns into a game, where the robot trades for a long time on a demo account in a manual tester, drawing all sorts of zones, marking peaks and so on, and improving its skill.


This is where DQN makes me curious, because the machine would learn to trade systematically, without memorising the price path. Like a beginner who is told on a course to "cover the right side of the chart and analyse the history".
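For what it's worth, the mechanics behind DQN can be seen in a tabular Q-learning sketch: it is the same update rule, with a lookup table in place of the neural network. Everything below is a toy assumption, not a trading method: a random-walk "price", a crude two-bit state, and a reward equal to the position's one-step P&L.

```python
import random

random.seed(1)

# Synthetic price series (random walk) standing in for real quotes.
prices = [100.0]
for _ in range(5000):
    prices.append(prices[-1] + random.gauss(0, 1))

ACTIONS = (-1, 0, 1)            # short, flat, long
alpha, gamma, eps = 0.1, 0.9, 0.1

def state(t):
    """Crude state: signs of the last two price changes (4 states total)."""
    return (prices[t] > prices[t - 1], prices[t - 1] > prices[t - 2])

Q = {}  # Q[(state, action)] -> value, missing entries treated as 0.0

def best_action(s):
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

for t in range(2, len(prices) - 1):
    s = state(t)
    # Epsilon-greedy action selection: explore sometimes, else exploit.
    a = random.choice(ACTIONS) if random.random() < eps else best_action(s)
    reward = a * (prices[t + 1] - prices[t])        # one-step P&L of position a
    s_next = state(t + 1)
    target = reward + gamma * max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (target - old)        # the Q-learning update

print(f"learned {len(Q)} state-action values")
```

DQN replaces the `Q` dictionary with a network that generalises across states, plus a replay buffer and a target network, but the update target (`reward + gamma * max Q(s', b)`) is exactly this one. On a true random walk nothing exploitable is there to learn, which is rather the point of the "don't memorise the price path" remark.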

 
Dmytryi Nazarchuk #:
What if TP=10, SL=20, and the probability of closing a trade in profit is 99%?

First, with 10/20 the probability is skewed, and a neural network does not give 99% (50-70% at best). 99 or 100 happens only with averaging, and we rule that out.
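The two positions are easy to compare with a plain expected-value calculation (a sketch that ignores spread and commission):

```python
def expectancy(p_win, tp, sl):
    """Expected points per trade: p*TP - (1-p)*SL, costs ignored."""
    return p_win * tp - (1 - p_win) * sl

# Dmytryi's hypothetical: TP=10, SL=20, 99% winners -> clearly positive.
print(round(expectancy(0.99, 10, 20), 2))   # 9.7 points per trade

# Maxim's point: a realistic model gives more like 50-70% accuracy, and
# with a 10/20 skew most of that range is unprofitable.
for p in (0.50, 0.60, 0.70):
    print(p, round(expectancy(p, 10, 20), 2))
```

The break-even win rate for TP=10, SL=20 is 20/30 ≈ 66.7%, so the whole disagreement reduces to whether the model's real accuracy clears that bar, not whether 99% would be profitable.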