Errors, bugs, questions - page 2377
Yes. Any Print from OnInit
Thank you. Interesting: if I had not happened to notice it, how would it have been possible to find out about it at all...
P.S. I would leave it for local Agents only. In the Cloud the log can easily be spammed this way.
When you run genetics, do you optimise according to your custom criterion?
Based on the logs presented, OnTester returned 0 in all cases
I usually optimise according to my criterion, but here I tried the custom criterion as well. The result is the same.
OnTester returns 0, which is why there are zeros in the results - that much is clear. The question is why it returns "0" during the general run (optimization), while a single run started from one of those "zero results" (with the same parameters) produces a normal result, graph, etc. In other words, something is not working during the complete slow optimization ("full pass"), and yet genetics works fine. Any other thoughts/ideas?
Any other thoughts/ideas?
You can pull all the information about an optimization pass in this way:
Forum on trading, automated trading systems & strategy testing
MT5. STRATEGY TESTER. Divergence of testing and optimization results.
fxsaber, 2017.08.22 11:06
Insert these lines into the EA
And run the Optimization. Then run the single pass that gives the mismatched result.
Then compare the two saved reports - the one from the corresponding Optimization pass and the one from the single pass.
The result of comparing the two reports will quickly reveal the causes.
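fxsaber's original code lines were not captured in this page, but the idea can be sketched. A minimal, hypothetical example (not the original snippet) would log the key tester statistics at the end of every pass, so the numbers printed during an optimization pass can be compared line by line with the numbers from a single run with the same inputs:

```mql5
//--- hypothetical sketch: log pass statistics so that an optimization pass
//--- and a single run with the same parameters can be compared directly
double OnTester()
  {
   PrintFormat("pass stats: profit=%.2f trades=%d equity_dd=%.2f",
               TesterStatistics(STAT_PROFIT),
               (int)TesterStatistics(STAT_TRADES),
               TesterStatistics(STAT_EQUITY_DD));
   return(TesterStatistics(STAT_PROFIT)); // or your own custom criterion
  }
```

If the printed values diverge between the two runs, the first statistic that differs usually points at the cause.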
Can you share the EA (ex5 in a private message) and the optimization conditions?
We want to reproduce the problem you mentioned.
After the research the EA will be irrevocably erased.
Replied in a private message.
While getting acquainted with the Socket* functionality, a number of questions about the current implementation arose.
The goal is to improve what has been done as much as possible; I ask the developers not to be offended by possible criticism.
1. I do not understand the reasons for such strong differences in the "interfaces" of the socket read functions.
2. The name of the SocketIsReadable function has nothing to do with what it actually does:
in fact, SocketIsReadable is analogous to the ioctlsocket() function with the FIONREAD flag in Ws2_32.dll.
3. How can a user of the Socket* functionality over an unencrypted connection get a response from the server with minimal delay, if the server does not break the connection after transferring the data?
Yes, that is how it is done.
4. SocketIsReadable returns false information.
Turn off the internet and execute the above code.
As a result, SocketIsReadable still returns a "sane" value of 1 - with no connection at all. Wonders.
I managed to describe about one third of all questions and problems related to Socket*.
Unfortunately, checking, describing and double-checking everything took a lot of time... (so it is not a given that there will be a sequel).
The general impression is that either everything was done in a big hurry, or the Socket* functionality was handed to a junior developer.
In any case, the current solution is very crude and covers a rather narrow approach to using sockets.
1. This is the interface.
The TLS functions are auxiliary, to support complex cases. There is no problem with setting SocketTimeouts - those are the best to use.
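As a minimal sketch of what setting the timeouts looks like (the host and port here are placeholders, not from the thread):

```mql5
//--- sketch: set explicit send/receive timeouts right after connecting
int sock = SocketCreate();
if(sock != INVALID_HANDLE && SocketConnect(sock, "example.com", 80, 5000))
   SocketTimeouts(sock, 5000, 5000);   // send / receive timeouts in milliseconds
```

Note that the server address must be present in the terminal's list of allowed URLs for SocketConnect to succeed.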
2. It performs its function correctly.
You must not be aware of the problems with detecting a broken TCP connection. It is quite difficult (resource-intensive, at the cost of extra calls) to reliably detect that a connection is definitely broken. All network implementations suffer from this problem.
Our implementation of SocketIsReadable is smart enough and has a break detector. When it detects a clean 0 bytes, it does the extra work of checking whether the socket is still alive:
since it can only return a byte count, without a separate termination flag, it reports 1 byte, so that the subsequent/imminent SocketRead attempt will return an error in the normal way.
Why is this correct? Because most of the code is written by programmers in this way:
the actual result of the operation is checked on a direct read attempt.
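The pattern described above can be sketched as follows (assuming `sock` is an already connected socket handle; the readiness value is treated only as a hint, and the real outcome is checked on the direct read attempt):

```mql5
//--- sketch: readiness is a hint, the read itself reports the real result
void CheckAndRead(const int sock)
  {
   uint ready = SocketIsReadable(sock);
   if(ready > 0)
     {
      uchar buf[];
      int   got = SocketRead(sock, buf, ready, 100); // 100 ms read timeout
      if(got < 0)                                    // a detected break surfaces here
         Print("read failed, connection is broken: ", GetLastError());
     }
  }
```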
3. You need to call SocketIsReadable() before the actual read if you do not know the exact size of the data to be read.
The SocketIsReadable/SocketRead pair lets you avoid losing control (reduce the loss of control to almost zero) over the execution flow of your program. This avoids flying into network timeouts.
Yes, it is a few more lines of code, but you will not lose control for more than a millisecond (roughly). What to do in the intervals when there is no network data is up to you.
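A hedged sketch of such a non-blocking read loop, assuming an already connected socket and a response of unknown size (the function name and timeouts are illustrative, not from the thread):

```mql5
//--- sketch: poll with SocketIsReadable so the EA never sits inside a long
//--- network timeout; between polls control stays with our own code
string ReadResponse(const int sock, const uint wait_ms)
  {
   string result   = "";
   uint   deadline = GetTickCount() + wait_ms;
   while(GetTickCount() < deadline && !IsStopped())
     {
      uint len = SocketIsReadable(sock);
      if(len == 0) { Sleep(1); continue; }        // no data yet - we keep control
      uchar buf[];
      int   got = SocketRead(sock, buf, len, 10); // short read timeout
      if(got <= 0) break;                         // error or closed connection
      result += CharArrayToString(buf, 0, got);
     }
   return(result);
  }
```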
4. This is explained in the second point.
The 1 is returned to stimulate a read, and the break then surfaces as a read error.
Your conclusions are wrong.
This is the nature of TCP/IP transport, where there are no guarantees at all. You can also fall into network black holes on filters/firewalls when the TCP signalling part is absent. Your own timeouts and data-flow control allow you to detect them and terminate the connections yourself.
We have provided a raw/direct-access interface to the network functions, including TLS implementations. If you use them, it is up to you to properly wrap the raw functions in a secure/controlled SocketIsReadable/SocketRead handler.
If you want to make high-level requests without having to think about the minutiae, there are WebRequest functions. All the protections are built in there.