I got LightGBM models converted to ONNX under both Python 3.10 and 3.11 with the onnxmltools and onnxconverter_common packages. The output worked only under Python 3.10 with the onnxruntime package, which did not yet support 3.11. Maybe something has changed in the last three weeks.
Too bad that ME5 doesn't support the Python py launcher.
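For reference, a minimal sketch of that conversion path with onnxmltools and onnxruntime (the synthetic data, feature count and file name below are assumptions, not the poster's setup; the exact package versions that work together under 3.10 vs 3.11 may differ):

# Minimal sketch: convert a trained LightGBM model to ONNX and run it with onnxruntime.
# Assumes lightgbm, onnxmltools, onnxconverter-common and onnxruntime are installed.
import numpy as np
import lightgbm as lgb
from onnxmltools import convert_lightgbm
from onnxconverter_common.data_types import FloatTensorType
import onnxruntime as ort

X = np.random.rand(200, 10).astype(np.float32)
y = (X.sum(axis=1) > 5).astype(int)
model = lgb.LGBMClassifier(n_estimators=50).fit(X, y)

# Declare the input signature: dynamic batch size (None), 10 features.
onnx_model = convert_lightgbm(model, initial_types=[("input", FloatTensorType([None, 10]))])
with open("lightgbm_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

sess = ort.InferenceSession("lightgbm_model.onnx", providers=["CPUExecutionProvider"])
print(sess.run(None, {"input": X[:5]})[0])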
What to change, what numbers and in what range to insert, etc. Just the mechanical work, please
the instrument/timeframe and the dates in copy_rates_range, and the number of input close prices used for the forecast (here time_step = 120 and input_shape=(120,1) in model.add(Conv1D)); in this case it is the number of hourly close prices on which the next price forecast is based (see the sketch after this list);
the architecture of the model itself, e.g.
model = Sequential()
model.add(LSTM(units=50, return_sequences=True, input_shape=(time_step, 1)))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(units=50))
model.add(Dropout(0.2))
model.add(Dense(units=1))
model.compile(optimizer='adam', loss='mse', metrics=[rmse()])
EA parameters
input double InpLots       = 1.0;   // Lots amount to open position
input bool   InpUseStops   = true;  // Use stops in trading
input int    InpTakeProfit = 500;   // TakeProfit level
input int    InpStopLoss   = 500;   // StopLoss level
the trading algorithm itself, etc.
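A minimal sketch of the pieces listed above on the Python side (the symbol, timeframe and dates are placeholder values, not a recommendation):

# Sketch of the parts you would normally edit; values are examples only.
from datetime import datetime
import numpy as np
import MetaTrader5 as mt5

mt5.initialize()
# instrument, timeframe and date range for the training data
rates = mt5.copy_rates_range("EURUSD", mt5.TIMEFRAME_H1,
                             datetime(2022, 1, 1), datetime(2022, 12, 31))
mt5.shutdown()
close = np.asarray(rates["close"], dtype=np.float32)

# number of input close prices per forecast; must match input_shape=(time_step, 1)
time_step = 120
X, y = [], []
for i in range(len(close) - time_step):
    X.append(close[i:i + time_step])
    y.append(close[i + time_step])
X = np.array(X).reshape(-1, time_step, 1)
y = np.array(y)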
Thank you
Could you please tell me where to put these files? (I downloaded them from the site, and there is no installer, just files in a folder.)
GPU calculations were performed on an NVIDIA GeForce RTX 2080 Ti graphics card using the libraries ... and cuDNN 8.1.0.7.
There is a video, "Setting Up CUDA, CUDNN, Keras, and TensorFlow on Windows 11 for GPU Deep Learning" (YouTube, 2022.01.05), that shows how to install them.
Note the first comment under the video: you need to explicitly specify TensorFlow version 2.10.0.
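Once everything is installed, a quick sanity check from Python confirms that the pinned TensorFlow build actually sees the GPU (standard TensorFlow calls, nothing specific to this article):

# Check that TensorFlow was installed with working GPU support.
import tensorflow as tf

print(tf.__version__)                          # should print 2.10.0 if the pinned version was installed
print(tf.test.is_built_with_cuda())            # True for a CUDA-enabled build
print(tf.config.list_physical_devices("GPU"))  # should list the RTX card if CUDA/cuDNN are found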

Got it!
Could you please tell me where to experiment here?
What to change, what numbers and in what range to insert, etc. Just the mechanical work, please
There is no point in experimenting there, because such a forecast is no different from a naive one (the previous closing price is taken as the forecast). In that case you really do get almost the smallest training error (RMSE), but it says nothing about the predictive ability of the model. Rather, it is an educational example showing that even a complex architecture can easily be transferred to the terminal via ONNX. I don't know what the authors of that article on research of neural network architectures for time series forecasting were smoking :) here you need either an adequate evaluation, or classification instead of regression.
Options of metrics for experiments: Pablo Cánovas, medium.com
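To see the point about the naive forecast, it is enough to compute the RMSE of simply repeating the previous close and compare it with the model's test RMSE; a self-contained sketch on synthetic random-walk data (not the article's data):

# On a random-walk-like price series, the naive "previous close" forecast
# already gives a small RMSE, so a model that merely matches it proves nothing.
import numpy as np

rng = np.random.default_rng(0)
close = 1.10 + np.cumsum(rng.normal(0.0, 0.0005, 5000))   # synthetic hourly closes

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

naive_pred = close[:-1]          # forecast = previous close
actual = close[1:]
print("naive RMSE:", rmse(actual, naive_pred))
# Compare this with the model's test RMSE from the article's script:
# roughly equal numbers mean no real predictive ability.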

- Free trading apps
- Over 8,000 signals for copying
- Economic news for exploring financial markets
You agree to website policy and terms of use
New article How to use ONNX models in MQL5 has been published:
ONNX (Open Neural Network Exchange) is an open format built to represent machine learning models. In this article, we will consider how to create a CNN-LSTM model to forecast financial time series. We will also show how to use the created ONNX model in an MQL5 Expert Advisor.
There are two ways to create a model: you can use OnnxCreate to create a model from an onnx file, or OnnxCreateFromBuffer to create it from a data array.
If an ONNX model is used as a resource in an EA, you will need to recompile the EA every time you change the model.
Not all models have fully defined input and/or output tensor sizes. It is normally the first dimension that is responsible for the batch size. Before running a model, you must explicitly specify the sizes using the OnnxSetInputShape and OnnxSetOutputShape functions. The model's input data should be prepared in the same way as it was done when training the model.
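Both points (the dynamic first dimension and preparing inputs exactly as in training) can also be checked from Python with onnxruntime before the model is loaded in the EA; a sketch under the assumption that the model was exported as model.onnx with input shape (None, 120, 1) and that a MinMaxScaler was used in training:

# Sketch: inspect and test the exported ONNX model from Python before using it in MQL5.
import numpy as np
import onnxruntime as ort
from sklearn.preprocessing import MinMaxScaler

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print(inp.name, inp.shape)        # the first dimension is typically the dynamic batch size

# Input must be prepared exactly as in training: same scaling, same window length.
scaler = MinMaxScaler()           # in practice, reuse the scaler fitted on the training data
closes = np.random.rand(120).astype(np.float32)               # placeholder for the last 120 closes
x = scaler.fit_transform(closes.reshape(-1, 1)).reshape(1, 120, 1).astype(np.float32)

pred_scaled = sess.run(None, {inp.name: x})[0]
print(scaler.inverse_transform(pred_scaled.reshape(-1, 1)))   # forecasted close price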
Author: MetaQuotes