The history data for the custom symbol grows very rapidly, possibly reaching several GB in a single day.
It looks like you are using CustomRatesReplace() with websocket data that delivers far more than one update per one-minute candle. With live data there can be thousands of updates within a single candle, which is why the size of the hst file keeps growing. If you write data only once per minute (i.e. listen to only one kline frame per minute, the closing one), you can significantly reduce the size of the hst file.
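A minimal sketch of that idea, assuming a hypothetical websocket callback OnKlineFrame() and a custom symbol named "MyCustom" (both names are made up for illustration): only write a bar when the exchange marks the kline as closed, so each M1 candle is stored exactly once.

```mql5
// Sketch only. "MyCustom" and the OnKlineFrame() callback are hypothetical;
// hook this into your own websocket parsing code.
void OnKlineFrame(const MqlRates &bar, const bool kline_is_closed)
  {
   if(!kline_is_closed)
      return;                        // skip the thousands of intrabar frames
   MqlRates rates[1];
   rates[0] = bar;
   // CustomRatesUpdate() adds the bar if it is missing or overwrites the
   // existing one in place, touching only this single M1 candle.
   CustomRatesUpdate("MyCustom", rates);
  }
```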
I use the following code to import M1 data in real time, i.e. every minute I import the data for the previous minute. However, I find that the corresponding data on disk grows extremely fast over the course of a day, and my disk space is simply not enough. Could you please let me know if there is any problem with my code?
Yes, your script is continuously receiving real-time data via websocket and writing it to disk. This can indeed lead to rapid growth in the amount of data stored, especially if the script runs continuously like this without any data-management mechanism in place, so you should check that.
Your code replaces the existing rate data for the specified symbol and time range using the CustomRatesReplace() function. This means that every minute the entire history for the previous minute is rewritten, which leads to a significant increase in disk usage over time.
CustomRatesReplace: fully replaces the price history of the custom symbol within the specified time interval with the data from an array of MqlRates.
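For comparison, CustomRatesUpdate() only adds missing bars and overwrites matching ones, so re-sending the same minute does not rewrite the surrounding interval. A sketch under that assumption (the symbol name "MyCustom" is a placeholder):

```mql5
// Sketch: write one completed M1 bar to a custom symbol via CustomRatesUpdate()
// instead of CustomRatesReplace(). "MyCustom" is a placeholder symbol name.
bool WriteClosedBar(const datetime bar_time, const double open, const double high,
                    const double low, const double close, const long volume)
  {
   MqlRates rates[1];
   rates[0].time        = bar_time;
   rates[0].open        = open;
   rates[0].high        = high;
   rates[0].low         = low;
   rates[0].close       = close;
   rates[0].tick_volume = volume;
   rates[0].real_volume = volume;
   rates[0].spread      = 0;
   // CustomRatesUpdate() returns the number of bars added/updated, or -1 on error.
   return(CustomRatesUpdate("MyCustom", rates) == 1);
  }
```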
Thank you for your response. My understanding of this function is that it replaces the original data: data for the same time point can be replaced repeatedly, but only the last write should end up in the database, so the data on disk should not grow this fast. If my understanding is incorrect, could you please tell me which function I should use to import candlestick data in real time? In practice I update a single symbol only once per minute, 1440 times a day, and the space still grows quite fast. I would really like to know how the terminal itself manages to update M1 data in real time while taking up very little space.
Run this and check the hcc file size:
MqlRates rates[];
int nums = iBars(_Symbol, PERIOD_CURRENT);
nums = nums > 100000 ? 100000 : nums;            // cap at the last 100000 bars
if(nums == CopyRates(_Symbol, PERIOD_CURRENT, 0, nums, rates))
  {
   // delete the stored interval, then write it back once
   if(nums == CustomRatesDelete(_Symbol, rates[0].time, rates[nums-1].time))
     {
      CustomRatesUpdate(_Symbol, rates);
     }
  }