Machine learning in trading: theory, models, practice and algo-trading - page 739
And the word "curve fitter", is that something neural?
No, it's something obscene.
It's not the quality of the network that will go down; it's the overfitting that will be reduced, and your model will appear in all its glory and squalor.
Well, well.... Watch this... ba ba ba ba...
And you keep training your models on a thousand bars.....
You don't understand the physical meaning of generalization. Everything brilliant is simple. A model doesn't have to be complicated; the simpler it is, the better... Good luck!!!!
By the way, this is the same model that has been running for its second week, and yes, it was trained on 40 points....
Although you may well have a much better result than mine. If so, then how much better than mine....
You still don't understand that I don't feed the network ALL the bars for the period; I feed it only a selected subset, so I don't load the network with useless work on unnecessary bars, for example during the Asian session or at night. But you feed it the entire market and force it to always be in the market. That is why it ends up giving you rubbish.
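A minimal sketch of that idea of feeding the network only selected bars, assuming hourly data and an illustrative 08:00-20:00 session window (nothing here is the poster's actual code; the column names and session hours are assumptions):

```python
# Minimal sketch (not the poster's actual code) of feeding the network only a
# selected subset of bars instead of every bar in the period: hourly bars
# outside the overnight/Asian session are dropped before training.
# Column names and session hours are assumptions for illustration.
import pandas as pd

def select_training_bars(bars: pd.DataFrame,
                         start_hour: int = 8,
                         end_hour: int = 20) -> pd.DataFrame:
    """Keep only bars whose timestamp falls inside the active session."""
    hours = pd.to_datetime(bars["time"]).dt.hour
    mask = (hours >= start_hour) & (hours < end_hour)
    return bars.loc[mask].reset_index(drop=True)

# Synthetic hourly bars standing in for an exported price series.
bars = pd.DataFrame({
    "time": pd.date_range("2018-07-01", periods=72, freq="H"),
    "close": 1.16 + pd.Series(range(72)) * 1e-4,
})
train_bars = select_training_bars(bars)
print(f"kept {len(train_bars)} of {len(bars)} bars for training")
```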
But I'll say it again. The winner of this argument will be the one with the balance curve growing at the proper level.....
This is still the same model running for the second week and yes, trained on 40 points....
Wait 2 months, I think that is the minimum required time to make sure that everything works correctly. During this time anything can happen.
I agree, but who keeps a model that long? I think it will run this week, at most one more, and then it goes off for retraining......
People who mindlessly demand a larger sample size are people crippled by the limit theorem: they subconsciously believe that a larger sample makes the estimates more likely to be true.
And I say subconsciously because usually all of these people know that sample size is no guarantee against blowing the account.
Why on earth would we say anything about the future by estimating non-stationary noise?
If we are discussing the Mihail Marchukajtes model, there are two circumstances that get ignored in the sample size discussion:
Discussing sample size without taking these two points into account is a path to "garbage in, garbage out."
PS.
In medicine, sample sizes of a few dozen examples are commonplace. Nevertheless, the resulting statistics on such samples form the basis of decisions about the use of drugs.
Why?
Because before a drug is even created, a great deal of theoretical effort goes into justifying its effect on the disease.
The only thing that sets us apart is that we lump everything together. Look at this thread: 99% is about perceptrons and almost nothing about data mining.
I did that in 3 days, I had +150%; then the market changes, and so on ...
The percentage of the deposit doesn't mean anything. I have a small balance, so that parameter is off the charts; what matters instead is the profit factor. Mine is around three, which is very good. Two is good only sometimes...
The week is not over yet, so we wait.....
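For reference, the profit factor discussed above is gross profit divided by gross loss over the closed trades; a minimal sketch with made-up trade results (not anyone's real report):

```python
# Minimal sketch of the profit factor metric discussed above:
# gross profit divided by gross loss over a list of closed-trade results.
# The trade results below are illustrative numbers, not anyone's real report.
def profit_factor(trade_results):
    gross_profit = sum(r for r in trade_results if r > 0)
    gross_loss = -sum(r for r in trade_results if r < 0)
    return float("inf") if gross_loss == 0 else gross_profit / gross_loss

trades = [120.0, -35.0, 80.0, -25.0, 60.0]   # hypothetical closed trades
print(round(profit_factor(trades), 2))       # ~4.33 here; 3 is called very good above
```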
Good post!!!!
Starting from the idea that the polynomial itself is not what matters but rather the method of obtaining it, we can run the following experiment.
You prepare your dataset, 100 points in size. You reserve an OOS interval, but send me only the training section: ALL the possible inputs and the outputs as zeros and ones (several output columns, if output variants are available).
In the end the table should contain 100 columns of inputs, 5-6 columns of outputs, and 100 rows.
I build models and send them to you, say 10 of them. You then record the results of these models on the training section and send them back to me. I tell you which of the 10 models works. You take that one and run it on the OOS section....... But only on the condition of detailed reports with charts and numbers. A maximally open experiment...
How about this plan????
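A minimal sketch of the dataset layout proposed above, assuming a CSV exchange format, random placeholder values, and an 80/20 train/OOS split (the column names, the split ratio, and the file names are all assumptions, not part of the post):

```python
# Minimal sketch of the proposed dataset layout: 100 rows, 100 input columns
# and 5 binary output columns, split into a training section and a reserved
# OOS section. All names and the 80/20 split are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

n_rows, n_inputs, n_outputs = 100, 100, 5
inputs = pd.DataFrame(rng.normal(size=(n_rows, n_inputs)),
                      columns=[f"in_{i}" for i in range(n_inputs)])
outputs = pd.DataFrame(rng.integers(0, 2, size=(n_rows, n_outputs)),
                       columns=[f"out_{j}" for j in range(n_outputs)])
table = pd.concat([inputs, outputs], axis=1)

# Reserve the tail of the table as the OOS interval; only the
# training section would be sent to the model builder.
split = int(n_rows * 0.8)
table.iloc[:split].to_csv("train_section.csv", index=False)
table.iloc[split:].to_csv("oos_section.csv", index=False)
print(table.shape)   # (100, 105)
```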