Machine learning in trading: theory, models, practice and algo-trading - page 1510
Oh, what a bunch of people. Are you the Trickster I'm talking about? Notice I capitalized it :-)
Because you're telling the truth, what's gotten into you? Well done, sit down, top marks. I'd add that when splitting the space of points described by these inputs (in our case a multidimensional space), the main thing is to divide the region so that the points do not end up in the wrong Yes or No group, and it is just as important that FUTURE input vectors also land correctly on the two sides of the barricades: ours and the enemy's. But for the network to keep working in the future, it is not enough to divide the current data; it has to be divided so that the coefficients of the polynomial can work on their own, independently of the training inputs. Only then will the grid work. I have long racked my brains over how to calculate the generalization level of the resulting polynomial, but since the result of generalization also lies in the future, it cannot be reliably computed, only assumed; hence any method of determining generalization is indirect. One option: after obtaining the polynomial coefficients, run an inverse optimization... hmm... I need to try...
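Just to make the "indirect" idea tangible, here is a minimal walk-forward sketch in Python: the model is always scored on data that comes strictly after its training window, which is about the closest practical proxy for "future" generalization. The synthetic features, the Yes/No labels, LogisticRegression and the five splits are placeholder assumptions, not anything from the post.

```python
# Indirect generalization check: train only on the past, score only on what follows.
# X and y below are synthetic placeholders for the input vectors and Yes/No labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                  # placeholder inputs
y = (X[:, 0] + 0.3 * rng.normal(size=1000) > 0).astype(int)     # placeholder Yes/No

scores = []
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LogisticRegression().fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))        # out-of-sample accuracy

print("walk-forward accuracy per fold:", np.round(scores, 3))
print("mean:", round(float(np.mean(scores)), 3))
```

If the fold scores stay close to the in-sample score, that is at least indirect evidence the division will survive new data; if they collapse, the coefficients are leaning on the training inputs.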
I will add: a neural network should clearly "understand", or at least guess, that it is the same pattern. The "meaning" of evacuation can be expressed in dozens of other ways, and a person who knows something about traffic rules and traffic organization will easily see that in essence it is the same pattern. The main thing in this particular pattern is "evacuation". How the tow truck and the towed car are drawn, what color and size they are, is a tenth-order detail. It is the same in the markets: patterns with the same meaning can look very different visually (because of distortions caused by the fractality of market charts), and, vice versa, squiggles that look identical at first glance can turn out to be different "meaningful" patterns. That is simply how waves of different scales happen to be forming at a given moment. The owls are not what they seem (c) Twin Peaks :)
A neural network should understand "meaning"; without that there is no way. Let it identify patterns poorly, make mistakes, work fuzzily like a brain, but it has to grasp the meaning at least a little. That is more important than crisp recognition of "pictures".
You can come up with signs that look very similar to the one in the picture, and an ordinary neural network will most likely confuse them with it, yet in terms of MA logic they would have a completely different meaning. You could even invent and draw them yourself to train your own natural neural network, but I'm too lazy :)
You know what a car looks like, don't you? Remember your childhood drawings? ... Now imagine you have never seen any transport except horses, and here is this silly sign, a black "square with holes" ))))
Are you so confident in the strength of your intellect that you could understand the meaning of such a sign?
This man (is he even a man?) is the obverse of the Grail, and Vizard_ is its reverse. The Grail itself cannot be seen by people; it is not allowed.
Eh, it's a pity Alyosha the son is no longer here, finished off by the villain-investors... Those were the days, life was boiling here. And now... Ugh!
My mind is already made up. No searching. Dull, monotonous optimization time after time, without any quest or adventure.
"Take that thing back" :))
"Take that thing back" :))
well yeah )))
Well, one more thought: people tend to be captive to illusions, or, more broadly, to cognitive distortions, as it is now fashionable to call delusions.
It is the same with ML and any discussion of what computers or robots can do: they are all rubbish, humans are much cooler!
Let's take simple examples:
1. Newton was hit by an apple (which never actually happened) and invented his ingenious formulas! What sample of people would you need to bash on the skull with apples to get a similar result? Or maybe it is easier to run the problem on a PC and let it grind through all possible data, and it will still find the solution?
2. Take a team of aircraft developers: they have experience and good software, so why do they still test a newly designed fuselage in a wind tunnel? Are they geniuses, and even the PC only helps them?
Why am I writing this? The point is that 99% of inventions are accidental, and the mathematical apparatus itself, for all its complexity, cannot describe elementary things (such as how the wind blows!).
And to think that man is the crown of creation while computer programs are just "dumb counting" is, imho, another delusion: a person makes himself a genius through random actions (physical or mental), and ML does exactly the same, searching for the solution to a problem by performing random actions.
PS: the only advantage a human has over the machine is associative thinking, and even there one can argue how much of an advantage it is. Sometimes previous experience hinders more than it helps in solving a new problem, while associative memory keeps suggesting that the solution be sought on the basis of positive previous experience (((
At first they were throwing apples, but then they realized it was Monte Carlo :))
Monte Carlo is good in that it imposes no strict rules on the initial conditions, yet still gives a pretty decent statistical estimate of the error in the results.
I don't know how yet, but I would like to make some mix of Q-learning + Monte Carlo, not in the tester but in a visualization mode, roughly the way they teach a neural network to play Angry Birds.
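For what it's worth, here is one rough, self-contained way such a "Q-learning + Monte Carlo" mix could look in Python: a toy tabular agent on a synthetic random-walk price whose Q-values are nudged toward full episode returns instead of the usual one-step bootstrapped target. The state coding, actions, episode length and all parameters are my own assumptions for illustration; the visualization part is left out entirely.

```python
# Toy sketch: tabular Q-values on a synthetic price series, updated from full
# Monte-Carlo episode returns instead of the one-step bootstrapped Q-learning target.
# States, actions, rewards and every parameter here are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=2000))          # synthetic random-walk "price"
returns = np.diff(prices)

def state(t):
    # crude 2-bit state: signs of the last two price changes
    return (int(returns[t - 1] > 0), int(returns[t - 2] > 0))

ACTIONS = (-1, 0, 1)                               # short, flat, long
Q = {}                                             # (state, action) -> value
eps, alpha, episode_len = 0.2, 0.05, 50

for _ in range(500):                               # episodes
    start = rng.integers(2, len(returns) - episode_len)
    trajectory = []
    for t in range(start, start + episode_len):
        s = state(t)
        if rng.random() < eps:
            a = ACTIONS[rng.integers(3)]           # explore
        else:
            a = max(ACTIONS, key=lambda x: Q.get((s, x), 0.0))  # exploit
        trajectory.append((s, a, a * returns[t]))  # reward = position * next change
    G = 0.0
    for s, a, r in reversed(trajectory):           # Monte-Carlo return, no discount
        G += r
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (G - Q.get((s, a), 0.0))

print("learned Q-table size:", len(Q))
```

On a real series the state coding and the reward would of course have to be replaced and an epsilon schedule added; this only shows where the Monte-Carlo return plugs into the Q update.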
https://medium.com/datadriveninvestor/teaching-a-robot-to-buy-low-sell-high-c8d4f061b93d
On artificial data it works just as in the article, I ran it. But then it's back to non-stationarity :)
Maybe if you take the differenced, stationary series from my article, something interesting will come out of it.
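In case it helps to see the "differenced stationary series" point in code, a tiny sketch: first-difference a synthetic random-walk price and compare ADF test results before and after. The synthetic price and the use of adfuller from statsmodels are my assumptions, not anything taken from the article.

```python
# First-difference a (synthetic) price series and check stationarity with the ADF test.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=1500)) + 100.0   # non-stationary random walk
diff = np.diff(price)                              # first differences (returns)

for name, series in (("price", price), ("diff", diff)):
    stat, pvalue = adfuller(series)[:2]            # ADF: H0 = unit root (non-stationary)
    print(f"{name}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")
```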
And yes, as far as Q-learning on an MDP goes, people now try to insert LSTM layers so that the model has more memory, as in the article by the author of this thread on Habr.
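Just to picture what "inserting an LSTM layer" into the Q-model could look like, here is a bare-bones PyTorch sketch: an LSTM over a window of recent bar features feeding a linear head that outputs one Q-value per action. The layer sizes, window length and three actions are arbitrary assumptions; this is not the architecture from the Habr article.

```python
# Minimal recurrent Q-network: LSTM over a feature window, linear head -> Q per action.
import torch
import torch.nn as nn

class LSTMQNet(nn.Module):
    def __init__(self, n_features=4, hidden=32, n_actions=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # Q-values from the last time step

# Usage: Q-values for a batch of 8 windows of 30 bars with 4 features each.
net = LSTMQNet()
q = net(torch.randn(8, 30, 4))
print(q.shape)                            # torch.Size([8, 3])
```

The recurrent hidden state is what gives such a model its extra memory compared with a plain feed-forward Q-network.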