Machine learning in trading: theory, models, practice and algo-trading - page 585

 
Maxim Dmitrievsky:

It's hard to evaluate trading models this way, because on top of everything else you have to add trade duration and stop-loss levels, and it also needs to be retrained periodically... what a mess :)

Yes, I saw it a long time ago. In itself it's not bad, but a cloud service is not very suitable for building a TS.
 
Yuriy Asaulenko:
I saw it a long time ago. It's OK in itself, but a cloud service is not very good for my TS.

You could sell signals :)) with access through an API, if the model is cool.

 

Sitting here reading a PDF of an ML monograph. Quote:

It turns out there's no need to jump around either; an NN seems to be the best option.

 
Yuriy Asaulenko:

Sitting here reading a PDF of an ML monograph. Quote:

It turns out there's no need to jump around either; an NN seems to be the best option.


And I'm reading Haykin and watching a movie at the same time.

The movie is atmospheric... what will win in the end? Protein life or artificial life, or will something in between be created? :)

By the way, some sources say probabilistic NNs are in fashion nowadays. A friend whispered it to me... and he knows a lot about them, he takes part in Kaggle contests.

 
Maxim Dmitrievsky:

And I'm reading Haykin and watching a movie at the same time.

The movie is atmospheric... what will win in the end? Protein life or artificial life, or will something in between be created? :)

By the way, some sources say probabilistic NNs are in fashion nowadays. A friend whispered it to me... and he knows a lot about them, he takes part in Kaggle contests.

Yesterday I came across convolutional NNs - usually used for image recognition. Naturally, all the utilities are there - training, etc. Made for use in Python.

There are also recurrent ones, etc., but those aren't very interesting yet.

Since a convolutional network is not fully connected, we can greatly increase the number of neurons without losing performance. But I still need to work out the details; I haven't dug into it yet.
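The "more neurons without losing performance" point comes down to parameter counting: a fully connected layer needs a weight per input per neuron, while a convolutional layer reuses one small kernel at every shift. A toy sketch (all sizes here are illustrative, not from the thread):

```python
# Weight count of a fully connected layer vs. a 1D convolutional layer.
# All sizes below are illustrative toy numbers.

def mlp_params(n_inputs, n_neurons):
    # every neuron gets every input, plus one bias per neuron
    return n_neurons * (n_inputs + 1)

def conv1d_params(kernel_size, n_filters):
    # each filter reuses the same small kernel at every shift
    return n_filters * (kernel_size + 1)

print(mlp_params(100, 60))   # 6060 parameters
print(conv1d_params(5, 16))  # 96 parameters
```

The convolutional layer's cost grows with the kernel size and filter count, not with the input length - which is why the neuron count can be raised so cheaply.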

A popular description: https://geektimes.ru/post/74326/
 
Yuriy Asaulenko:

Yesterday I came across convolutional NNs - usually used for image recognition. Naturally, all the utilities are there - training, etc. Made for use in Python.

There are also recurrent ones, etc., but those aren't very interesting yet.

Since a convolutional network is not fully connected, we can greatly increase the number of neurons without losing performance. But I still need to work out the details; I haven't dug into it yet.

A popular description: https://geektimes.ru/post/74326/

Well, that's heavy machinery - they're mainly used for images and computer vision. You need a lot of examples and layers to make it work. The architecture itself copies the visual system.

Try a PNN in Python; it makes more sense for time series prediction.

https://habrahabr.ru/post/276355/
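For a feel of what a PNN does, here is a minimal sketch (pure Python; the labels, samples, and sigma are invented for illustration): each training sample contributes a Gaussian kernel, kernels are summed per class, and the class with the largest sum wins.

```python
import math

# Minimal probabilistic neural network (PNN) classifier sketch:
# one Gaussian kernel per training sample, summed per class.
# Toy data and sigma are illustrative, not from the thread.

def pnn_classify(x, train, sigma=0.5):
    # train: list of (feature_vector, class_label) pairs
    scores = {}
    for xi, label in train:
        d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
        scores[label] = scores.get(label, 0.0) + math.exp(-d2 / (2 * sigma ** 2))
    return max(scores, key=scores.get)

train = [([0.0, 0.0], "down"), ([0.1, 0.2], "down"),
         ([1.0, 1.0], "up"),   ([0.9, 1.1], "up")]
print(pnn_classify([0.95, 1.0], train))  # → up
```

There is no iterative training at all - the network "memorizes" the samples - which is part of why PNNs are attractive for quick classification experiments.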

 
Maxim Dmitrievsky:

Well, that's heavy machinery - they're mainly used for images and computer vision. You need a lot of examples and layers to make it work. The architecture itself copies the visual system.

Try a PNN in Python; it makes more sense for time series prediction.

https://habrahabr.ru/post/276355/

Once again, I'm not predicting anything. I only do classification.

I've been looking for a not-fully-connected network for a long time. An MLP is all well and good, but every neuron gets all the inputs at once. What we need is for only 5-6 inputs, with a shift, to go to each neuron - and that is exactly the convolutional NN.

There's nothing complicated here, and you only need 100-150 neurons, so the structure is simple, and the speed will be like an MLP with 60 neurons, thanks to the smaller number of inputs per neuron.
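The "5-6 inputs with a shift per neuron" idea is, in effect, a plain 1D convolution: one small set of weights slides along the series, and each output "neuron" sees only its window. A toy sketch (the kernel and data are invented for illustration):

```python
# A plain 1D convolution: each output "neuron" sees only a small
# window of the series, shifted by one step. Toy kernel and data.

def conv1d(series, kernel, bias=0.0):
    k = len(kernel)
    out = []
    for i in range(len(series) - k + 1):
        window = series[i:i + k]  # only k inputs feed this "neuron"
        out.append(sum(w * x for w, x in zip(kernel, window)) + bias)
    return out

series = [1, 2, 3, 4, 5, 6]
kernel = [0.2] * 5  # a 5-tap moving-average kernel
print(conv1d(series, kernel))
```

Because the same kernel is reused at every shift, adding more output positions adds no new weights - the source of the speed advantage mentioned above.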

 
Yuriy Asaulenko:

Once again, I'm not predicting anything. I only do classification.

I've been looking for a not-fully-connected network for a long time. An MLP is all well and good, but every neuron gets all the inputs at once. What we need is for only 5-6 inputs, with a shift, to go to each neuron - and that is exactly the convolutional NN.

There's nothing complicated here, and you only need 100-150 neurons, so the structure is simple, and the speed will be like an MLP with 60 neurons, thanks to the smaller number of inputs per neuron.


Well, you have a classifier - so what's stopping you from looking for a not-fully-connected one? I just like this scheme, for example - I'm not going to screenshot the whole book :)


 
Yuriy Asaulenko:

Once again, I'm not predicting anything. I only do classification.

I've been looking for a not-fully-connected network for a long time. An MLP is all well and good, but every neuron gets all the inputs at once. What we need is for only 5-6 inputs, with a shift, to go to each neuron - and that is exactly the convolutional NN.

There's nothing complicated here, and you only need 100-150 neurons, so the structure is simple, and the speed will be like an MLP with 60 neurons, thanks to the smaller number of inputs per neuron.

The idea of using convolutional layers has been simmering for a long time. I think they can give good results.

But don't throw away the multilayer perceptron. Convolutional networks don't learn anything by themselves; they just produce a compact representation of the input information.
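The division of labor described here - convolution extracts compact local features, a perceptron on top does the actual deciding - can be sketched like this (all weights are arbitrary toy values, not a trained model):

```python
import math

# Sketch of "conv layer extracts features, perceptron decides":
# the convolution compresses the series into local features, and a
# single sigmoid neuron turns them into a class score.
# All weights are arbitrary toy values, not a trained model.

def conv1d(series, kernel):
    k = len(kernel)
    return [sum(w * x for w, x in zip(kernel, series[i:i + k]))
            for i in range(len(series) - k + 1)]

def perceptron(features, weights, bias):
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid "class probability"

series = [1.0, 2.0, 1.5, 3.0, 2.5, 4.0]
feats = conv1d(series, [0.5, -0.5])  # 2-tap difference filter
print(perceptron(feats, [0.3] * len(feats), 0.1))
```

In a real network both parts would be trained together by backpropagation; the sketch only shows why the perceptron head is still doing the classification work.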

 
Maxim Dmitrievsky:

Well, you have a classifier - so what's stopping you from looking for a not-fully-connected one?

So I'm looking for one :) Such an MLP would be optimal.