Machine learning in trading: theory, models, practice and algo-trading - page 3358
This topic was discussed back in the days of Porksaurus, Matemat, Granite and Metadriver, which is a long time ago.
I haven't seen that discussion; maybe I just missed it (back then I mostly read Cyberpawk). This is about models that output probability distributions rather than specific numerical values. Not that it's a completely new approach, but interest in it has noticeably surged in recent years.
Well, there have been many attempts, but I don't know of any successful public results. The simplest thing tried was to treat the output of a single neuron as a sell/buy probability in the range [-1.0; 1.0]; nothing good came of it, and applying a threshold doesn't help.
Another option is to use the distribution of the neuron's outputs as a probability, but I haven't seen anyone do that. For example, for identical sell/buy signals from the network's output neuron, the distribution of its values during training can differ greatly, so the behaviour on OOS will differ too.
Besides, I long ago posted charts of training and OOS behaviour where the equity line continues without breaking, without spread of course, and the inputs were increments of a simple moving average from different timeframes, nothing more. And then some geniuses suddenly drew the "brilliant" conclusion that the spread affects behaviour on OOS.
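A minimal numpy sketch of the second idea: instead of reading the raw neuron output as a probability, estimate P(buy | score) from the empirical class-conditional distributions of the output collected during training. All data, names and parameters here are illustrative, not anyone's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained-network outputs on the training set, one score per
# example, together with the true buy/sell labels (illustrative data).
scores = np.concatenate([rng.normal(0.4, 0.3, 500),    # outputs on "buy" examples
                         rng.normal(-0.4, 0.3, 500)])  # outputs on "sell" examples
labels = np.concatenate([np.ones(500), np.zeros(500)])

# Class-conditional histograms of the raw neuron output around [-1, 1].
bins = np.linspace(-1.5, 1.5, 31)
h_buy, _ = np.histogram(scores[labels == 1], bins=bins, density=True)
h_sell, _ = np.histogram(scores[labels == 0], bins=bins, density=True)

def p_buy(score):
    """Estimate P(buy | score) from the empirical output distributions."""
    i = int(np.clip(np.digitize(score, bins) - 1, 0, len(h_buy) - 1))
    den = h_buy[i] + h_sell[i]
    return 0.5 if den == 0 else h_buy[i] / den

# A strongly positive output maps to a high buy probability and vice versa.
print(p_buy(0.8), p_buy(-0.8))
```

The point of the sketch is that two networks with the same thresholded signals can have very different output distributions, and this mapping makes that difference visible.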
Still, classification is a relatively simple special case: the output distribution is discrete, so everything reduces fairly easily to the usual "point", numerical ML problem.
A broader approach is more interesting: models whose output is not a number but an arbitrary (within reasonable limits, of course) distribution. Examples are the ML used in reliability theory (where the distribution of lifetime is studied) or in probabilistic weather forecasting (where a probability distribution is constructed, say, for the possible amount of precipitation).
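As a toy sketch of what a distribution-valued output can look like: on heteroscedastic data a "model" that returns Gaussian parameters (mu, sigma) carries information a point forecast hides. The per-bin fit below is purely illustrative; a real model would parameterise mu(x) and sigma(x) and train them with, e.g., a negative-log-likelihood loss:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy heteroscedastic data: noise grows with x.
x = rng.uniform(0, 1, 2000)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.5 * x)

# The "distributional model" here is just mu(x) and sigma(x) fitted per bin.
edges = np.linspace(0, 1, 11)
idx = np.clip(np.digitize(x, edges) - 1, 0, 9)
mu = np.array([y[idx == i].mean() for i in range(10)])
sigma = np.array([y[idx == i].std() for i in range(10)])

def predict_dist(x_new):
    """Return the predicted Gaussian (mu, sigma) instead of a single number."""
    i = int(np.clip(np.digitize(x_new, edges) - 1, 0, 9))
    return mu[i], sigma[i]

# The predicted spread now reflects local uncertainty:
m_lo, s_lo = predict_dist(0.05)
m_hi, s_hi = predict_dist(0.95)
print(s_lo, s_hi)  # sigma is larger near x = 1
```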
...
Forming a probability distribution during training, not after.
And after training, what is the point of doing anything at all? A hypothetically dumb machine won't acquire new knowledge if you poke at it with a screwdriver after training.
I already described the example above. There is a classifier that passes the OOS, but its returns are distributed 60/40. You don't like that, so you raise the decision threshold, but the situation doesn't change, and sometimes even gets worse. You scratch your head over why.
An explanation was given: with real probability estimation the situation should change.
A solution is given.
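One standard way to turn raw classifier scores into "real" probability estimates is isotonic calibration via the pool-adjacent-violators algorithm. The sketch below is a generic illustration of that idea on synthetic data, not the specific solution referred to above:

```python
import numpy as np

def pav_calibrate(scores, labels):
    """Isotonic (pool-adjacent-violators) calibration: map raw classifier
    scores to probabilities that are monotone in the score and match the
    observed hit rates within pooled blocks."""
    order = np.argsort(scores)
    vals = list(labels[order].astype(float))
    wts = list(np.ones(len(vals)))
    i = 0
    # Pool adjacent violators until the sequence of block means is non-decreasing.
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:
            pooled = (vals[i] * wts[i] + vals[i + 1] * wts[i + 1]) / (wts[i] + wts[i + 1])
            vals[i:i + 2] = [pooled]
            wts[i:i + 2] = [wts[i] + wts[i + 1]]
            i = max(i - 1, 0)
        else:
            i += 1
    # Expand pooled blocks back to per-example calibrated probabilities.
    out = np.repeat(vals, np.array(wts, dtype=int))
    prob = np.empty_like(out)
    prob[order] = out
    return prob

# Synthetic scores whose true hit rate rises with the score (illustrative).
rng = np.random.default_rng(2)
scores = rng.normal(size=1000)
labels = (rng.random(1000) < 1 / (1 + np.exp(-3 * scores))).astype(float)
p = pav_calibrate(scores, labels)
print(p.min(), p.max())  # calibrated probabilities lie in [0, 1]
```

With calibrated probabilities, raising the decision threshold actually selects trades with a higher empirical hit rate, which is exactly what thresholding a raw score fails to guarantee.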
How do you find morons like this Karpov?
The man's head is a mess. The man is incapable of coherent thought. It's just creepy!
From the first minutes he simply states that the classifier does not give probability. And where can you get the probability without using what the classifier gives?
Well, and were you invited to work in England too, with that porridge in your head? )
the man doesn't care at all, he's doing fine.
Not understanding an idea != the idea being presented wrong. These are people of a slightly different formation; that's probably the problem.
It's been obvious for a long time that the topic needs new blood. I'm already an oldfag too. The new ones will come and show, if the forum doesn't get mouldy at the end of course.
The worst part is that I understand what changes in the brain occur with age, and why people reason this way and not that way. This obviousness is sometimes hilarious, but there's no getting away from it.
What's this got to do with England?
You seem to be a qualified person, yet you constantly drag things into the rubbish bin.
You very rarely argue on the merits...