Machine learning in trading: theory, models, practice and algo-trading - page 2568
Judging by what?
I went through the links on both of his sites: there are ML implementations of the idea. His notion of context is implemented as a large set of contexts, each of which is a separate network (an autoencoder, for example).
Ah, you mean only the implementation of the context? I was thinking of his AI as a whole; sorry, I wasn't paying attention.
Here is Alexey's talk where he presents his AI model (without the ancients, philosophy, etc.).
He is speaking to the AGI community, the people who set the trends in algorithms and are well acquainted with all kinds of networks, reinforcement learning, etc.
To me it is very interesting to watch, understand and realize how primitive our approaches to the market are.
AI is the ability of a machine to build a neural network to solve the composite problems that arise :)
First you need an algorithm that describes the obstacles on the way to the goal, and the AI must break them down into parts and build networks to solve them, determining the network configuration on its own.
In my opinion, the most important thing is to understand modern financial mathematics and the ML used in it. This is necessary if only because they are used by all the big players who shape prices.
There is a lecture on the subject on YouTube.
What is there to understand about financial math and ML? You need to know the mechanics of the market and its players.
The crowd is bound to lose in most cases, because its counterparty is a "major player".
1) You need to see the imbalance between retail buyers and sellers: for example, if there are many sellers, then on the other side of the trade there is a "major player" (the buyer).
For example, right now on the euro there are a lot of sellers.
2) There is also trading against the crowd in the moment; that is the market maker.
You can see that the price always moves against the crowd (inverse correlation); a rough way to check this is sketched after this post.
While the crowd is buying and believes in growth, the price falls, and vice versa...
That's the whole market.
p.s. And I'll watch the video for sure
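Not from the post itself, but a minimal sketch of how one could check that "inverse correlation" claim, assuming you have a retail long/short sentiment series from a broker's feed (the function name and the data below are mine, the data is synthetic):

```python
import numpy as np
import pandas as pd

def crowd_vs_price_correlation(price: pd.Series, long_ratio: pd.Series) -> float:
    """Correlation between the share of retail longs and the NEXT bar's return.

    price      -- close prices indexed by time
    long_ratio -- fraction of retail accounts positioned long (0..1), same index

    A clearly negative value would support "price moves against the crowd";
    a value near zero would not.
    """
    next_return = price.pct_change().shift(-1)  # return of the following bar
    df = pd.concat({"long_ratio": long_ratio, "next_return": next_return}, axis=1).dropna()
    return df["long_ratio"].corr(df["next_return"])

# Hypothetical usage with synthetic data (replace with a real sentiment feed):
idx = pd.date_range("2021-01-01", periods=500, freq="D")
rng = np.random.default_rng(0)
price = pd.Series(100 + 0.1 * rng.standard_normal(500).cumsum(), index=idx)
long_ratio = pd.Series(rng.uniform(0.2, 0.8, 500), index=idx)
print(crowd_vs_price_correlation(price, long_ratio))
```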
So, for each predictor a rule like 0.5<X<7.3 is taken,
Yeah, let's say that.
Then we enumerate all possible combinations?
No. Now we code the fulfilment of the inequality rule as 1 and look at the average value of the target (say, for binary classification) on the rows where the rule fires. If the initial average over the whole sample is, say, 0.45, and over the fired rows alone it becomes 0.51, then we consider that the predictor (its segment/quantum) has a predictive power of 0.06, i.e. 6%.
We gather a set of such predictors with their segments, which are effectively independent binary predictors, and use them to build a model.
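A minimal sketch of that procedure as I read it (the function name, thresholds and synthetic data are mine, not from the post): code the rule as 1 where it fires, compare the target mean on those rows with the target mean over the whole sample, and keep the segment as an independent binary predictor if the difference is large enough.

```python
import numpy as np
import pandas as pd

def quantum_power(x: pd.Series, y: pd.Series, lo: float, hi: float) -> float:
    """'Predictive power' of the segment lo < X < hi as described above:
    mean of the binary target on the rows where the rule fires
    minus the mean of the target over the whole sample."""
    fired = (x > lo) & (x < hi)  # the rule coded as 1/0
    if fired.sum() == 0:
        return 0.0
    return y[fired].mean() - y.mean()

# With the numbers from the post: base rate 0.45, in-segment rate 0.51 -> power 0.06.
# Here the data is synthetic, so the exact figure will differ.
rng = np.random.default_rng(1)
x = pd.Series(rng.uniform(0, 10, 10_000))
y = pd.Series((rng.uniform(0, 1, 10_000) < 0.45).astype(int))
print(round(quantum_power(x, y, 0.5, 7.3), 3))

# A set of such segments becomes a table of independent binary predictors:
features = pd.DataFrame({"x_0.5_7.3": ((x > 0.5) & (x < 7.3)).astype(int)})
```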
Combining every such quantum with every other one (with or without predictive power) is really not a quick matter, but it may not be pointless if it is done together with the base predictor on which a quantum with predictive power was identified.
In general, with small N (depending on sample size) it might work, but with large N it would be overfitting.
But even in theory this overfitting will be smaller, since there are fewer possible combinations than there were in the full sample.
It remains to understand why such quantum regularities may work for 7 years and then suddenly stop...
Tests for different kinds of trends out of the box
R best!!!!
There is not much difference (in terms of combinatorics) in how exactly this is coded. The point is the same: each row has, as features, which rules are satisfied and which are not. That is always 2^N patterns, where N is the number of rules. We then choose whether each of these patterns is included in the final set or not, which gives 2^(2^N) variants. It is clear that formally enumerating such a set of variants is simply unrealistic, which is why it makes sense to order them in some reasonable way: for example, first take all variants that are described by only one rule, then by only two, and so on.
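A small sketch of that ordering, assuming a hypothetical list of rule names: generate subsets by increasing size with itertools, so the single-rule variants come first, then the pairs, and so on, without ever materializing the full power set.

```python
from itertools import combinations
from typing import Iterator, Sequence

def subsets_by_size(rules: Sequence[str], max_size: int) -> Iterator[tuple[str, ...]]:
    """Yield rule subsets ordered by size: first single rules, then pairs, etc.
    The full 2^N power set is never built up front."""
    for k in range(1, max_size + 1):
        yield from combinations(rules, k)

rules = ["0.5<x1<7.3", "x2>0", "x3<1.2"]  # hypothetical rule names
for subset in subsets_by_size(rules, max_size=2):
    print(subset)
# ('0.5<x1<7.3',), ('x2>0',), ('x3<1.2',), then all the pairs, ...
```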
It remains to understand why such quantum regularities can work for 7 years and then suddenly stop...
Sooner or later many other players will find them, for example.