Machine learning in trading: theory, models, practice and algo-trading - page 2496

 
Vladimir Baskakov #:
When will the practical application begin?

When will you get off your backside and start applying it... instead of terrorizing the whole thread (for the umpteenth time), pestering people for trade signals... - my trading is not your trading!... your practice is not my headache... - apply it as you want and as you see fit...

 
JeeyCi #:

When will you get off your backside and start applying it... instead of terrorizing the whole thread (for the umpteenth time), pestering people for trade signals... - my trading is not your trading!... your practice is not my headache... - apply it as you want and as you see fit...

Unmotivated aggression suggests there is no practical implementation, just blah-blah.
 
Vladimir Baskakov #:
... suggests that ...

It suggests that all your previous trolling and rudeness turn out to be Reactions to, and Consequences of, people not responding to you -- you bring nothing constructive in return... and no one is obliged to generate market entries for you (simply because you know nothing but pushiness and inadequacy)

 
JeeyCi #:

It suggests that all your previous trolling and rudeness turn out to be Reactions to, and Consequences of, people not responding to you -- you bring nothing constructive in return... and no one is obliged to generate market entries for you (simply because you know nothing but pushiness and inadequacy)

Can you give examples?
 
Vladimir Baskakov #:
Can you give examples?

And no, you can't claim my time for yourself... you are a 0 in this thread (see the previous ~3000 pages)

 
JeeyCi #:

And no, you can't claim my time for yourself... you are a 0 in this thread.

I see, no examples.
 
Vladimir Baskakov #:
When will the practical application begin?
It has already started:
Integrating neural network forecasts into MetaTrader 5
  • www.mql5.com
⚠️ Files updated 08.11.21, current version 1.4. What's new: 1. Forecast quality for EURUSD has risen to 63%, for BTCUSD
 
JeeyCi #:

By the way, rather than tensorflow.keras (which Evgeny Dyuka uses), SKLearn seems more interesting - Interpretation of Machine Learning Results (maybe the library itself is not that great, but the evaluation logic is laid out there)

p.s.

you didn't attach...

I agree that ranking the features we feed to the NS is interesting, but no more than that. What do we get at the output? If we take as an axiom (or postulate) the statement that the current price already contains everything, then any feature is important no matter where it ranks, especially since there are not that many of them and I can prioritize them without SKLearn. Or explain what I missed - only in a simpler way: I sat with your next message for 15 minutes just to get to the essence of what it says))))
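For illustration, a minimal sketch of the kind of feature ranking SKLearn provides, so the two views above can be compared against something concrete; the feature matrix, labels and model choice here are all hypothetical, not data from the thread:

```python
# Sketch: rank features by permutation importance with scikit-learn.
# X (e.g. lagged returns, volatility, volume) and y (next-bar direction)
# are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                       # 5 made-up features
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Score drop when a feature is shuffled = that feature's importance.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```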
 
JeeyCi #:

As for the logic... an NS is used when you need to work around the absence of a formula describing the dependence of a feature on a factor... weighting is used... but before and after the NS, standard/classical statistical processing still applies... For example, having only the PDF f(x) = F'(x) = dF(x)/dx (we don't need the CDF, since all conclusions from population analysis are drawn from the PDF) and having volatile data, I first of all need to bring the distributions to uniformity so that they can be analysed jointly - and weighting helps here (I'm not aspiring to rigorous mathematics)... but the analysis itself has nothing to do with the NS, nor do its conclusions... such an estimate may be crude, but classical statistics is imperfect too (e.g., taking logarithms of increments already introduces trendiness into the conclusions by itself - a purely mathematical defect)... and any model has its Assumptions...
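For illustration, a minimal sketch in scipy.stats of working with the empirical PDF only and rescaling two volatile samples to a common scale before a joint analysis; the data are synthetic, and dividing by the sample std is just one assumed reading of "bringing distributions to uniformity":

```python
# Sketch: estimate the empirical PDF f(x) = F'(x) by kernel density and
# rescale two samples of different volatility so they can be pooled.
# Data and the rescale-by-std step are illustrative assumptions.
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(1)
calm  = rng.normal(0.0, 0.5, 1000)   # hypothetical low-volatility returns
storm = rng.normal(0.0, 2.0, 1000)   # hypothetical high-volatility returns

# Dividing each sample by its own std brings the two distributions to a
# common scale, making a joint (pooled) analysis meaningful.
pooled = np.concatenate([calm / calm.std(), storm / storm.std()])

pdf = stats.gaussian_kde(pooled)      # empirical PDF, no CDF needed
print(pdf(0.0))                       # density of the pooled sample at 0
```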

Market participants do NOT wait for predictions - they assess risk and volatility and make their trading (and hedging) decisions based on that... It's just that this analysis has two varying factors - volatility and the time window - and the NS helps bring the samples to uniformity (though GARCH works as well; see the sketch below) so that they can be analysed jointly within one statistical model, and helps determine the horizon... in those moments when there is no mathematical formula and none is needed (everything in this world changes)... but by weighting, weighting and weighting again (for the sake of compression to a regression) - to enable joint analysis within one statistical model, and preferably without noise, or at least with noise minimized...
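A sketch of the GARCH alternative just mentioned, assuming the third-party arch package and a synthetic two-regime return series: fit a GARCH(1,1), then divide the returns by the fitted conditional volatility so that samples from different volatility regimes become comparable:

```python
# Sketch: devolatilise returns with GARCH(1,1) (third-party `arch`
# package); the two-regime return series is synthetic.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.normal(0, 1, 2000) * np.repeat([0.5, 2.0], 1000)  # two regimes

res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
standardized = returns / res.conditional_volatility

print(standardized.std())   # close to 1 after devolatilisation
```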

The logic of Bayesian inference for the Gaussian case is worth keeping in mind...
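For reference, the conjugate update this Bayesian-Gaussian logic boils down to when the observation variance is treated as known; all numbers are illustrative:

```python
# Sketch: conjugate Bayesian update of a Gaussian mean with known
# observation variance. Prior: mu ~ N(mu0, tau0^2). All values made up.
import numpy as np

mu0, tau0 = 0.0, 1.0                  # prior mean and std for mu
sigma = 0.5                           # assumed known observation std
data = np.array([0.3, 0.1, 0.4, 0.2])

n = len(data)
post_prec = 1 / tau0**2 + n / sigma**2                     # precisions add
post_mean = (mu0 / tau0**2 + data.sum() / sigma**2) / post_prec

print(post_mean, 1 / np.sqrt(post_prec))   # posterior mean and std
```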

The main thing, I suppose, is to build an NS architecture such that the variance does not grow as the signal passes through the neuron layers on the way to the output... imho (why accumulate it when it is available as it is - a rhetorical question)... and after that the classical logic of statistics takes over... and even very deep history does not hold enough samples to analyse robust moments qualitatively (everything happens in life)... I suppose outliers can occur in Mihail Marchukajtes's classification model as well... (I need to think about how the sequent should deal with them?)
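One way to probe that "variance should not grow layer by layer" idea with tensorflow.keras (the library mentioned earlier in the thread): feed random input through a small net and print the std of each layer's activations. The architecture and the SELU/LeCun-normal choice are illustrative assumptions, not a claim about anyone's actual setup:

```python
# Sketch: measure activation std per layer; SELU + LeCun-normal init is
# one classic variance-preserving combination. Architecture is made up.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(16,))
x = inputs
taps = []
for units in (32, 32, 1):
    x = tf.keras.layers.Dense(units, activation="selu",
                              kernel_initializer="lecun_normal")(x)
    taps.append(x)

probe = tf.keras.Model(inputs, taps)    # exposes every layer's output
acts = probe.predict(np.random.normal(size=(1024, 16)), verbose=0)

for i, a in enumerate(acts):
    print(f"layer {i}: activation std = {a.std():.3f}")
```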

that's my imho so far... I'll also have a look at import scipy.stats as stats

p.s.

thanks for the link

I'm a bit confused by your statement that "the NS helps bring the samples to uniformity". How is that?

Further - " the main thing, I suppose, is to build such an architecture of NS, that the variance does not increase when neuron layers pass on the way to the output ". I have a question, what do you mean by that, more details and more concrete. I just assume that there is some common sense in it that I can't grasp. By the way if you want to avoid the spread of ideas, then let's take it to a personal note, I would also be happy to share and hear your opinion. I have some thoughts on the fact that it is not the NS does not give us a reliable result, and we can not see the forest for the trees. Any ideas (and experiments with code accordingly) how this can be bypassed.

 
eccocom #:
Or explain what I missed - only in a simpler way: I sat with your next message for 15 minutes just to get to the essence of what it says))))

Jason Brownlee (author of Deep Learning With Python and Statistical Methods for Machine Learning) - The 3 Mistakes Made By Beginners:

1. Practitioners Don't Know Stats
2. Practitioners Study The Wrong Stats
3. Practitioners Study Stats The Wrong Way

eccocom #:
then any feature is important no matter where it ranks, especially since there are not that many of them and I can prioritize them without SKLearn.

Different features are important under different conditions... but if you are sure you can rank them correctly in the moment, then you yourself are the AI (with what accuracy and what error, I don't know)...

What to feed as input - decide for yourself, test it yourself, and don't forget to check your hypotheses yourself (Student's t-test lives in scipy's stats module; see the sketch below)... in general, neural networks are a handy tool for overcoming the difficulties of working with large samples in statistics, but they do not replace statistical logic - they implement it... including the understanding that the sample must be representative, not pulled out of thin air (including the number! and the quality [heterogeneity] of the samples)... something like that
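A minimal sketch of that hypothesis check with Student's t-test from scipy.stats, on two made-up samples of strategy returns:

```python
# Sketch: Student's t-test (Welch variant) on two hypothetical return
# samples; a small p-value suggests the means genuinely differ.
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(3)
returns_a = rng.normal(0.02, 0.1, 250)   # made-up daily returns, system A
returns_b = rng.normal(0.00, 0.1, 250)   # made-up daily returns, system B

t_stat, p_value = stats.ttest_ind(returns_a, returns_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```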
