Machine learning in trading: theory, models, practice and algo-trading - page 2948

 
Maxim Dmitrievsky #:
One less crutch; the range of models in use will expand greatly (before, most people just optimised weights via terminal inputs). Apparently it should work on Mac too, I'll check soon :) sometimes it's nice to do nothing and wait for the food to fly into your mouth by itself.

Well, I got into this to improve my Python level - I can't do without it now). I'm thinking of starting with LightGBM. There seem to be two ways to get ONNX out of it - onnxmltools and skl2onnx.
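If it helps, here is a minimal sketch of the onnxmltools route (assumptions: a LightGBM regressor trained on synthetic data; the feature count, input name and output file name are arbitrary):

```python
import numpy as np
import lightgbm as lgb
from onnxmltools import convert_lightgbm
from onnxmltools.convert.common.data_types import FloatTensorType

# toy data just so there is something to train on
X = np.random.rand(1000, 10).astype(np.float32)
y = X[:, 0] - X[:, 1] + 0.01 * np.random.randn(1000)

model = lgb.LGBMRegressor(n_estimators=200)
model.fit(X, y)

# declare the input signature: one float32 tensor, 10 features, unknown batch size
initial_types = [("input", FloatTensorType([None, X.shape[1]]))]
onnx_model = convert_lightgbm(model, initial_types=initial_types)

with open("lgbm_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```

The skl2onnx route should work too, but as far as I know it needs the LightGBM converter registered first, so onnxmltools looks like the shorter path for a first try.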

 
Note that native execution of ONNX models allows you to run them easily and very quickly in the tester and in the Cloud Network without system overhead, which is almost impossible with third-party integrations.

It is during strategy testing that all those "what's the big deal about losing 50 ms per call" overheads add up and increase testing time by thousands of times.
 
Evgeny Dyuka #:
I hear this legend about the importance of speed all the time, but I can't understand where it actually matters.
Taking spread and exchange/broker commissions into account, you have to forecast over a horizon measured in tens of minutes or hours. What does a 50 millisecond difference have to do with that?
How exactly does beating MQ by 5 milliseconds, fxsaber-style, help you in real life?

Suit yourself, but I wouldn't turn down even a small speed increase, both in testing/optimisation and in trading.

 
Renat Fatkhullin #:
Note that native execution of ONNX models allows you to run them easily and very quickly in the tester and in the Cloud Network without system overhead, which is almost impossible with third-party integrations.

It is during strategy testing that all those "what's the big deal about losing 50 ms per call" overheads add up and increase testing time by thousands of times.
I am afraid that optimisation in the cloud will not work. The point of optimisation is to vary the TS parameters, for example TP/SL selection. When they change, the training data changes, i.e. the model has to be retrained for every parameter set, and for that the ML software (CatBoost, a neural network framework, etc.) has to be installed. It is unlikely that a cloud agent will happen to have the required software, of the right version, installed.

So training can only be done in the tester on the developer's machine.
And it makes no sense to upload the finished model to the cloud.

 
Maxim Kuznetsov #:

It can be simpler... hook up Redis, get RedisAI with PyTorch, ONNX and TensorFlow support, and, if desired, distribute the load across nodes and clouds.

Our aircraft has on board a swimming pool, a dance floor, a restaurant, cosy recreation areas, a winter garden... Dear passengers, fasten your seatbelts, now we're going to try to take off with all this shit.

 
Forester #:
I am afraid that optimisation in the cloud will not work. The point of optimisation is to vary the TS parameters, for example TP/SL selection. When they change, the training data changes, i.e. the model has to be retrained for every parameter set, and for that the ML software (CatBoost, a neural network framework, etc.) has to be installed. It is unlikely that a cloud agent will happen to have the required software, of the right version, installed.

So training can only be done in the tester on the developer's machine.
And it makes no sense to upload the finished model to the cloud.

To be fair, a model is not necessarily a complete TS. For example, the model predicts a price increment, and a threshold value for the predicted increment is set in the EA parameters; the EA then tries to trade on it.
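A tiny Python illustration of that split (the file name and feature count follow the earlier conversion sketch and are assumptions): the ONNX model stays fixed, and only the threshold would be swept by the optimiser.

```python
import numpy as np
import onnxruntime as ort

# hypothetical model file produced by the earlier conversion sketch
session = ort.InferenceSession("lgbm_model.onnx")
input_name = session.get_inputs()[0].name

def signal(features: np.ndarray, threshold: float) -> int:
    """Map the predicted price increment to a long/short/flat decision."""
    pred = float(session.run(None, {input_name: features.astype(np.float32)})[0].ravel()[0])
    if pred > threshold:
        return 1    # long
    if pred < -threshold:
        return -1   # short
    return 0        # stay flat

# the optimiser would sweep only `threshold`; the model itself is untouched
features = np.random.rand(1, 10)
for threshold in (0.0005, 0.001, 0.002):
    print(threshold, signal(features, threshold))
```

Retraining is only needed when the features or the target change, not when such trade-logic parameters change.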

 
Aleksey Nikolayev #:

Our aircraft has on board a swimming pool, a dance floor, a restaurant, cosy recreation areas, a winter garden... Dear passengers, fasten your seatbelts, now we're going to try to take off with all this shit.

IMHO this is exactly about the current direction of MQL development. An attempt to cram everything inside at once instead of integrations.

 
Maxim Kuznetsov #:

IMHO this is exactly about the current direction of MQL development. An attempt to cram everything inside at once instead of integrations.

+

 
Maxim Kuznetsov #:

IMHO this is exactly about the current direction of MQL development. An attempt to cram everything inside at once instead of integrations.

Paths for integrations have always been open:

  • Native DLLs
  • .NET DLL
  • HTTP/HTTPS
  • Raw Sockets
  • Files/Pipes
  • SQLite
  • Python library


But it's the native language integrations that make it possible to write complete applications.

When it comes to ML, we've worked on and implemented:

  • vectors, matrices and operations with them as a basis for machine learning
  • integration with Python, including launching Python programs in the terminal as ordinary scripts (a sketch follows this list)
  • native use of ONNX models, which opens a huge door to the practical application of neural network models
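On the Python integration point, a minimal sketch with the MetaTrader5 pip package (a running terminal is assumed; the symbol and bar count are arbitrary examples):

```python
import MetaTrader5 as mt5

# connect to the running terminal
if not mt5.initialize():
    raise RuntimeError(f"initialize() failed, error code: {mt5.last_error()}")

# pull the last 100 H1 bars for EURUSD as a numpy structured array
rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 100)
print(rates[:3])

mt5.shutdown()
```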

We manage to create complete and fast solutions.

Words about "trying to cram" indicate only a negative attitude without rational justification, especially since having these capabilities does not limit the writer in any way.
 
Stanislav Korotky #:

Give a link to the relevant documentation, please. Or spare me the pathos. R is a monstrous thing in its own right. You suggest studying an encyclopaedia instead of giving a simple answer to a specific question.

Nobody in the world studies an encyclopaedia; they study a specific article. I gave links to a very specific article. And you will get not only an answer to your theoretical question but also working code.