Machine learning in trading: theory, models, practice and algo-trading - page 3502
In the Mahabharata, the Bhagavad Gita is the key part, where Krishna explains karma yoga to Arjuna. "Do what must be done, and be what will be" — it suits trading very well, for example)
Yes, it's probably the most famous and powerful text :)
Well, this philosophy teaches humility... but it's not good for development.
Development is usually (very often) confused with a change of scenery. Especially when memory is bad.
Hmm... try explaining how your approach differs from clustering/mode detection. What are the advantages?
Well, the approaches differ: my method distills the data, which is especially relevant when there is strong class imbalance. Along with the sifting process, the data is explored at each iteration. Primarily it is an exploratory method for selecting predictors and quantization tables. An additional goal is to make training other classifiers easier by removing examples that fall into predictor ranges where the classes are hard to detect.
The process can be visualised as a tree like this
The tree has gone through 2 iterations and reached the third. In the oval is the data that we will not explore further when building the current model — class "0".
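The sifting idea described above could be sketched roughly like this. This is purely an illustration of the principle, not the poster's actual code: the `distill` function, the purity threshold, and the use of shallow sklearn trees are all my assumptions.

```python
# Hypothetical sketch of the iterative "distillation"/sifting idea:
# at each iteration, fit a shallow tree, find leaves dominated by
# class "0" (regions considered resolved), set those rows aside, and
# keep exploring only the remainder. All names/thresholds are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def distill(X, y, iterations=3, purity=0.9, min_leaf=50, seed=0):
    """Return a boolean mask of rows kept for training the next classifier."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(iterations):
        idx = np.flatnonzero(keep)
        if idx.size < min_leaf:
            break
        tree = DecisionTreeClassifier(max_depth=2, min_samples_leaf=min_leaf,
                                      random_state=seed)
        tree.fit(X[idx], y[idx])
        leaves = tree.apply(X[idx])
        for leaf in np.unique(leaves):
            members = idx[leaves == leaf]
            # a leaf almost purely class "0" is "resolved": stop exploring it
            if (y[members] == 0).mean() >= purity:
                keep[members] = False
    return keep

# toy usage: imbalanced data with two predictors
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 1.2).astype(int)
mask = distill(X, y)
```

Each pass corresponds to one level of the tree picture above: the "oval" of resolved class-0 data is dropped, and the next model is built only on what remains.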
And so I ran 100 iterations and looked at the probability that a split would be just as effective at separating zeros from ones on new data.
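The "100 iterations" check could be approximated with a simple resampling loop: refit a single split many times and count how often it still separates the classes on held-out data. Again, this is a hedged sketch under my own assumptions (depth-1 sklearn stump, balanced accuracy, an arbitrary 0.55 threshold), not the poster's method.

```python
# Sketch: estimate the probability that a single split generalizes,
# by refitting a depth-1 stump on repeated random train/test partitions
# and scoring it on the unseen half. All parameters are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

def split_stability(X, y, n_iter=100, threshold=0.55, seed=0):
    """Fraction of resamples where the best single split beats
    `threshold` balanced accuracy on data it was not fitted on."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_iter):
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, test_size=0.5, stratify=y,
            random_state=int(rng.integers(1 << 30)))
        stump = DecisionTreeClassifier(max_depth=1).fit(Xtr, ytr)
        if balanced_accuracy_score(yte, stump.predict(Xte)) > threshold:
            wins += 1
    return wins / n_iter

# toy usage: one informative predictor out of two
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] > 0.8).astype(int)
p = split_stability(X, y, n_iter=20)
```

A high `p` suggests the split captures a real pattern rather than noise; splits that only work on the training half would score near zero.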
Human hubris does not lead to anything good. Especially when multiplied by Dunning-Kruger.)
My message was that radicals drive progress. Humility is already the stage of waiting for death — accepting the inevitable, with no possibility of any other outcome.
Usually (very often) development is confused with a change of scenery. Especially when memory is bad.
Maybe. Much depends on the context. It is not very convenient to talk about such topics on the forum - you need to feel that the interlocutor is on the same wavelength and understands the word-weaving.
How is it different from clustering?)
You'd better tell me what you see as the similarities. Unless the goal is similar - to improve learning.
In my case the learning is supervised; clustering is unsupervised.
The similarity is dividing into groups and analysing those groups afterwards.