Errors, bugs, questions - page 2285

 
Alexey Navoykov:

Yes, but the notion of "fast" in your case is very relative. It is one thing when a user requests an array of bars: a block of memory is simply copied. Requesting a specific timeseries is also plain copying, with a constant stride equal to the size of the structure. It is quite another thing to perform additional calculations and conversions on every single number.

Although personally I would prefer a compressed history, so as not to waste memory, since I organize my own arrays for storing it anyway. So I am willing to tolerate a small delay. But most other users would tear you to pieces for it )

P.S. Ideally, though, it would be nice to have an option in the terminal to choose how history is stored in memory. For example, on a system with little RAM but a fast processor it would be very useful.

Well, look. I just measured write and read speeds on my SSD. It turns out that writing and reading 8 bytes (one value of type double, datetime or long) takes ~48 ns, while by my estimate reading 8 bytes from a packed array takes 1-2 ns. So while we lose 1-2 ns on each access to a structure element, we gain 48 × 0.8 ≈ 38 ns on disk I/O, because 5-fold compression removes about 80% of the bytes written and read. On top of that comes the 5-fold saving of RAM and disk space itself. And I am not even talking about HDDs.
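For illustration, a minimal MQL5 sketch of such a measurement (the file name and element count are arbitrary; absolute numbers depend on the drive and on OS caching, which can flatter the write figure):

// Rough disk I/O benchmark: average time to write and then read
// one 8-byte value through the file API.
void BenchmarkDiskIO()
  {
   const int N = 10000000;                      // 10M longs = 80 MB
   int h = FileOpen("bench.bin", FILE_WRITE|FILE_BIN);
   ulong t0 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      FileWriteLong(h, (long)i);
   FileClose(h);
   ulong t1 = GetMicrosecondCount();

   h = FileOpen("bench.bin", FILE_READ|FILE_BIN);
   long sum = 0;
   for(int i = 0; i < N; i++)
      sum += FileReadLong(h);                   // consume the data
   FileClose(h);
   ulong t2 = GetMicrosecondCount();

   PrintFormat("write: %.1f ns / 8 bytes, read: %.1f ns / 8 bytes (sum=%I64d)",
               (t1 - t0) * 1000.0 / N, (t2 - t1) * 1000.0 / N, sum);
  }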

 
Nikolai Semko:

Well, look. I just measured write and read speeds on my SSD. It turns out that writing and reading 8 bytes (one value of type double, datetime or long) takes ~48 ns, while by my estimate reading 8 bytes from a packed array takes 1-2 ns. So while we lose 1-2 ns on each access to a structure element, we gain 48 × 0.8 ≈ 38 ns on disk I/O, because 5-fold compression removes about 80% of the bytes written and read. On top of that comes the 5-fold saving of RAM and disk space itself. And I am not even talking about HDDs.

I don't argue with that. When it comes specifically to loading data from disk, you are certainly right. Four years ago I discussed this with Renat, back when SSDs were still quite uncommon and the vast majority of users were on slow HDDs. Using my SSD as an example, I tried to convince him that the HDD is the slowest link in the system and that we should minimize the amount of data read from it, not the other way around. But his arguments were: no need to load the CPU with extra work, and you are all fools who understand nothing, etc. In short, business as usual )

True, SSDs have become significantly faster since then.

It turns out that writing and reading 8 bytes (one value of type double, datetime or long) takes ~48 ns

But why measure writing together with reading? Data is written once, when it is received from the server or when a cache is built, and after that it is only read. So, as the saying goes, flies separately, cutlets separately.
 

Forum on trading, automated trading systems and trading strategy testing

Errors, bugs, questions

fxsaber, 2018.09.10 21:28

First, the Optimization log.

Tester  optimization finished, total passes 714240
Statistics      optimization done in 7 hours 31 minutes 06 seconds
Statistics      local 714240 tasks (100%), remote 0 tasks (0%), cloud 0 tasks (0%)
Core 1  connection closed
Core 2  connection closed
Core 3  connection closed
Core 4  connection closed
Core 5  connection closed
Core 6  connection closed
Core 7  connection closed
Core 8  connection closed
Tester  714240 new records saved to cache file 'tester\cache\Test.FILTER_EURUSD.rann_RannForex.M1.20180226.20180909.40.2D734373DF0CAD251E2BD6535A4C6C84.opt'

During those 7.5 hours the SSD was being accessed at a huge rate: if the ticks were re-read on every pass, that averages out to 26 reads per second, sustained for 7.5 hours. Hence the frantic blinking of the drive activity light: more than 700 thousand reads.


Single run log

Core 1  FILTER_EURUSD.rann_RannForex,M1: 132843 ticks, 60283 bars generated. Environment synchronized in 0:00:00.140. Test passed in 0:00:00.827 (including ticks preprocessing 0:00:00.109).
Core 1  FILTER_EURUSD.rann_RannForex,M1: total time from login to stop testing 0:00:00.967 (including 0:00:00.140 for history data synchronization)
Core 1  322 Mb memory used including 36 Mb of history data, 64 Mb of tick data

As you can see, ~130K ticks and 60K bars are used (the "All history" mode is selected in the Tester), i.e. a very small amount of history.

The history of the custom symbol in the Terminal contains the following amount of data:

Saved ticks = 133331
Generated Rates = 60609

I.e. the symbol's history contains only slightly more data than the Tester actually uses.


P.S. It is painful to watch the SSD... And how much faster could the optimization run? It is strange that the OS does not cache this data, given that the ticks take less than 7 MB even uncompressed (133331 ticks × 52 bytes ≈ 6.9 MB).


Which Terminal folder should be redirected to a RAM disk via mklink, so that this data is read and written in memory rather than on the SSD? I am ready to report back what kind of speedup this gives in Optimization.
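For reference, a directory junction is created roughly like this (run in an elevated cmd.exe with the terminal closed; R: is an existing RAM disk, and both the placeholder paths and the choice of the tester folder are only an assumption, which is exactly what the question asks about):

rem <data folder> stands for the terminal's data directory
move "<data folder>\tester" "R:\tester"
mklink /J "<data folder>\tester" "R:\tester"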

 
Nikolai Semko:

Yes, and this is extremely important. If I understand correctly, ticks and minute bars are currently stored unpacked, i.e. a bar (the MqlRates structure) takes 60 bytes and a tick (the MqlTick structure) takes 52 bytes.
It is horrible! Something should have been done about this long ago.

I understand that the main problem with compressed arrays is organizing fast access to individual array elements.

But if we store, say, every 256th element of the array unpacked and keep the remaining elements only as increments relative to those unpacked anchors, the array becomes 4-5 times smaller, access to an element barely slows down (by perhaps 1-2 nanoseconds), and an enormous amount of time is saved on reading the array from disk and writing it back. A sketch of the idea is below.
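A minimal MQL5 sketch of that scheme (all names are mine; one double field is enough to show the access pattern, while the 4-5x figure assumes packing whole MqlRates/MqlTick structures this way):

// Every 256th value is kept as a full double "anchor"; the rest are
// stored as 4-byte offsets from their anchor, expressed in points.
// Random access stays O(1): an index shift, a multiply and an add.
struct PackedSeries
  {
   double            anchors[];   // full value of every 256th element
   int               deltas[];    // offset from the anchor, in points
   double            pt;          // price step, e.g. 0.00001

   void              Build(const double &src[], const double point)
     {
      pt = point;
      int n = ArraySize(src);
      ArrayResize(anchors, (n + 255) / 256);
      ArrayResize(deltas, n);
      for(int i = 0; i < n; i++)
        {
         if((i & 255) == 0)
            anchors[i >> 8] = src[i];                // unpacked anchor
         deltas[i] = (int)MathRound((src[i] - anchors[i >> 8]) / pt);
        }
     }
   // Unpacking one element: the "extra 1-2 ns" mentioned above.
   double            Get(const int i) const
     {
      return anchors[i >> 8] + deltas[i] * pt;
     }
  };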

"Everything has already been stolen before you" (cz)

At the start of the day a full tick is stored. After that, bid and/or ask and/or last are written in full, and everything else as increments, where present. On average it comes out to about 10 bytes per tick.

Since access to ticks is strictly sequential, there is no problem in arranging fast access to each element of such an array.
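The real on-disk format is not published, so the following MQL5 fragment is only a toy version of the described scheme (the field layout is hypothetical; sign handling and variable-length encoding are glossed over). It shows why strictly sequential access makes such packing cheap: the previous tick is always at hand, so each field is either rewritten in full or taken as "previous value plus increment".

#define F_BID  0x01
#define F_ASK  0x02
#define F_LAST 0x04

// Reads one packed tick from an open file handle, updating the
// previous tick in place: a flags byte says which prices follow
// in full, everything else is stored as a small increment.
void ReadPackedTick(const int h, MqlTick &prev)
  {
   uchar flags = (uchar)FileReadInteger(h, CHAR_VALUE);
   prev.time_msc += FileReadInteger(h, CHAR_VALUE);   // ms increment
   if((flags & F_BID) != 0)
      prev.bid = FileReadDouble(h);                   // changed price, in full
   if((flags & F_ASK) != 0)
      prev.ask = FileReadDouble(h);
   if((flags & F_LAST) != 0)
      prev.last = FileReadDouble(h);
   // One flags byte, a small time increment and typically a single
   // 8-byte price add up to roughly the quoted ~10 bytes per tick.
  }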

 

A big request: publish the structure of the "Tester\cache\*.opt" records. You can see from the contents that the format is very simple.

The ability to work with Optimization results is very much needed. Thank you!

 

For some reason the Tester's performance drops as the number of trades grows. The Expert Advisor makes no calls to the trading history.

It seems this should not be happening.

 

The Tester memorises the date interval corresponding to the "All history" mode. I add history to the custom symbol and restart the Terminal, but the interval stored for "All history" remains the old one.

I cannot change the default mode, so when I do want the whole history I have to set the interval manually. Please fix this.

 

A delete cross is missing in the place marked in the picture; it should remove the corresponding line, i.e. the cache entry.

I run a lot of optimizations. Some of them have long been obsolete, and there is no mechanism for deleting them. The list becomes huge, and you end up searching through variants nobody needs.

Therefore, please consider adding the ability to delete unneeded data via a cross in the place marked in the picture.

 
A100:
Error during execution

Result: true:false:7:4

How can strings of different lengths suddenly be equal? Meanwhile, comparison via StringCompare gives the opposite result to ==.

Thanks for the post. We have changed the behaviour of character-by-character string comparison.

Previously strings were compared as Z-strings (up to the terminating zero character); now they are compared as Pascal-style strings (taking the stored length into account).

Existing code with "normal" strings (no zero character inside) is not affected by this change.
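A minimal MQL5 script illustrating the change (the zero character inside the second string is created deliberately; the concrete strings are my example, not A100's original):

// Before the fix, == compared up to the first zero character, so
// these two strings of different lengths compared as equal.
// StringCompare already honoured the stored length.
void OnStart()
  {
   string a = "ABC";                            // length 3
   string b = "ABCDEFG";                        // length 7
   StringSetCharacter(b, 3, 0);                 // b = "ABC<0>EFG"
   Print(a == b);                               // old: true, new: false
   Print(StringCompare(a, b));                  // non-zero: different
   Print(StringLen(a), ":", StringLen(b));      // 3:7
  }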

 
A big request for the Tester: close by Bid/Ask when the last known Last price is zero.