Machine learning in trading: theory, models, practice and algo-trading - page 2874

 
mytarmailS #:

how's that?

It doesn't work like that.

Recurrent networks can do this, but I'd like to do it without neural nets.

 
Aleksey Nikolayev #:

Recurrent networks can do this, but I'd like to do it without neural nets.

Do you need a memory of the previous states of the emission, or what?

 
mytarmailS #:

Do you need a memory of the previous states of the emission, or what?

Yes, there will be some memory. For example, in the simplest case of an exponential mean, its value at the previous step is remembered. We need to generalise all this in a reasonable way.

 
Aleksey Nikolayev #:

Yes, there will be some memory. For example, in the simplest case of an exponential average, its value at the previous step is remembered. We need to generalise all this in a reasonable way.

just give the ML algorithm the last n values, as with a moving average. I understand that doesn't fit, but I don't understand why?

 
mytarmailS #:

just give the ML algorithm the last n values, as with a moving average. I understand that doesn't fit, but I don't understand why?

Ideally, the algorithm should receive as input all available history, which obviously grows over time. It should decide which pieces to chop it into and what to do with them.

 
mytarmailS #:

in the open, but you have to refresh the tokens every n time units.

If the token is dead, we send a request to the server, log in again, get a new token and continue working.

That's the simplest thing I've stalled on.

If it's not ajax, I can help you parse it.

 
Aleksey Nikolayev #:

Ideally, the algorithm should receive as input all available history, which obviously grows over time. It should decide which pieces to chop it into and what to do with them.

Alexey, you have no idea how much we're interested in the same things; this is the second coincidence of interests...

Though I'm not sure about the application and the reason for it.

I have algorithms:

there's a simple one, relatively fast but superficial,

and there's a complex, slow one, not completely finished, but deep...

These algorithms are terribly slow compared to regular ML algorithms, thousands of times slower, maybe more, but they see the data differently.

 
Evgeni Gavrilovi #:

If it's not ajax, I can help you parse it

I can parse simple things myself; the question is whether, via requests, you can log back into the profile and get a new token. Do you know how to do that?
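For what it's worth, a minimal sketch of the re-login-on-dead-token flow. The `/login` path and the `"token"` JSON field are assumptions, not a real API; the HTTP calls are injected as callables, so with `requests` you would pass thin wrappers around `requests.post`/`requests.get`.

```python
# Sketch of the "token died -> log in again -> retry" flow.
# The /login path and the "token" field are assumptions, not a real API.

def login(http_post, base_url, username, password):
    """Re-authenticate and return a fresh access token."""
    resp = http_post(base_url + "/login",
                     {"username": username, "password": password})
    return resp["token"]

def request_with_refresh(http_get, http_post, base_url, path, creds, token):
    """Try the request; on 401 (dead token) re-login once and retry."""
    status, body = http_get(base_url + path, token)
    if status == 401:
        token = login(http_post, base_url, *creds)
        status, body = http_get(base_url + path, token)
    return status, body, token
```

With `requests`, the wrappers would be something like `lambda url, payload: requests.post(url, json=payload).json()` plus a getter that attaches the token; whether the server wants a header, a cookie, or a query parameter is site-specific.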

 
Alexander Ivanov #:
I respect Maxim.
But I don't understand why he worships other people's developments, praising the same tensor.

We can do it ourselves, can't we?!
Not in a lifetime. Your task is to write a trading system, not neural networks. You'll get stuck.
 
Aleksey Nikolayev #:

I am interested in the topic of algorithms with an arbitrary number of features. I know about recurrent networks; I would like the same, but without neural nets.

- I am interested in the topic of algorithms with an arbitrary number of features. I know about recurrent networks; I would like the same, but without neural nets.

*You can use machine learning algorithms such as linear regression, logistic regression or decision trees, which can handle an arbitrary number of features. They require less computation than recurrent networks and are usually faster.

However, recurrent networks can be effective when you need to deal with sequences of data such as text, voice messages, or motion data. Recurrent networks can remember context from previous steps and use it to predict the next step. This can be useful, for example, when analysing text, where words can have different meanings depending on the context.


- What is meant is that recursive functions are constructed that can depend on an arbitrary number of arguments, but are defined by ordinary functions from a fixed number of arguments. The simplest example is the exponential mean.


*You have correctly described recursive functions, which are defined through themselves. They can depend on any number of arguments, but are usually defined via a fixed number of arguments.

You gave an example of an exponential average, which is a recursive function defined as follows:

EMAvolume(t) = (volume(t) + (n - 1) * EMAvolume(t - 1)) / n

Where EMAvolume(t) is the exponential average of the trading volume at the t-th step, volume(t) is the trading volume at the t-th step, n is a constant that takes into account the weights of previous steps.

Recursive functions can be useful for modelling sequences of data and are used in various fields such as finance, biology and computer science.
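For concreteness, the recursion above can be carried forward one step at a time; the only "memory" required is the previous EMA value. A minimal Python sketch (seeding with the first observation is a convention assumed here, not part of the formula):

```python
# EMAvolume(t) = (volume(t) + (n - 1) * EMAvolume(t - 1)) / n,
# carried forward one step at a time; the state is a single number.

def ema_stream(volumes, n):
    """Yield the exponential average after each new observation."""
    ema = None
    for v in volumes:
        # Seed with the first observation, then apply the recursion.
        ema = v if ema is None else (v + (n - 1) * ema) / n
        yield ema

print(list(ema_stream([10, 20, 30], 2)))  # [10, 15.0, 22.5]
```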


- Recurrent networks can do this, but I would like the same without neural nets.


*If you want to work with recurrent functions without using recurrence nets, you can use machine learning algorithms such as linear regression, logistic regression, or decision trees that can work with sequences of data. They can use recurrent functions as features and predict outcomes based on a set of features.

However, these algorithms may be less efficient than recurrent networks when a large number of data sequences need to be handled, as recurrent networks have some capabilities to handle sequences efficiently that other algorithms lack.
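One way to sketch the "recurrent functions as features" idea: fold an arbitrarily long history into a fixed-size state (here a few exponential averages; the spans are arbitrary choices for illustration), which any fixed-input model can then consume.

```python
# Fold an arbitrarily long history into a fixed-size recursive state:
# a few exponential averages updated one observation at a time.
# The spans (5, 20, 50) are arbitrary choices for illustration.

def recursive_features(history, spans=(5, 20, 50)):
    """Return a fixed-length feature vector regardless of len(history)."""
    state = {n: history[0] for n in spans}  # seed with first observation
    for x in history[1:]:
        for n in spans:
            state[n] = (x + (n - 1) * state[n]) / n  # EMA recursion
    return [state[n] for n in spans]
```

However long the history grows, the model input stays at `len(spans)` numbers; the price is that everything the model is allowed to see must be expressible as such a fold.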

- Proceed

In some cases, recurrent networks can be difficult to train and have high computational complexity as they require a large number of computations to handle sequences. In this case, it may be appropriate to use machine learning algorithms such as linear regression, logistic regression, or decision trees, which require less computation and run faster.

However, if you are working with sequences of data that require context from previous steps, recurrent networks may be a better solution than other machine learning algorithms. In this case, you should consider all available options and choose n


- Ideally, the algorithm should receive as input all available history, which obviously grows over time. It should decide which pieces to chop it into and what to do with them.


*Yes, ideally the algorithm should be able to work with an arbitrary number of features and determine on its own which ones are the most important for prediction. However, this can be difficult for some machine learning algorithms that require manual feature partitioning or a fixed number of features. One algorithm that can handle an arbitrary number of features is gradient boosting. This algorithm uses a composition of small decision trees, each considering only a small number of features, and accumulates them into a sum to produce the final algorithm. This algorithm can independently determine which features are the most important to predict.
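As a toy illustration of that composition-of-small-trees idea (not a production booster; brute-force split search on a single feature, squared-error loss assumed):

```python
# Toy gradient boosting: each round fits a depth-1 tree ("stump")
# to the residuals of the current ensemble and adds it in.

def fit_stump(xs, residuals):
    """Best single-threshold stump on one feature by squared error."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x < split]
        right = [r for x, r in zip(xs, residuals) if x >= split]
        if not left or not right:
            continue  # degenerate split
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x < split else rmean

def boost(xs, ys, rounds=20, lr=0.5):
    """Accumulate stumps into an additive model."""
    stumps = []
    preds = [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

# Fit a step-shaped target; the ensemble recovers it from residuals alone.
model = boost([1, 2, 3, 4], [0, 0, 1, 1])
```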

Neural networks with fully connected layers can also be used, which can also handle an arbitrary number of features. However, these networks may be less effective when dealing with sequences of data, as they do not take into account dependencies between steps.

Reason: