It works: in genetic optimization mode with the "Maximum of user criterion" setting, the tester's GA maximizes the value returned by double OnTester(). Everything works, but there is a problem with automation: the GA needs "cheering up" when it starts converging around a found local maximum and stops exploring other input-parameter variants. In fine tuning you generally have to help the GA by adding conditions to OnTester() or by splitting the optimization parameters into several intervals.
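The "adding conditions to OnTester()" idea can be illustrated with a small sketch. This is hypothetical Python rather than MQL5, and the trade-count and drawdown thresholds are invented for illustration: the raw profit criterion is zeroed out for degenerate passes so the GA is not attracted to them.

```python
# Hypothetical sketch (Python, not MQL5) of a custom OnTester()-style
# criterion with extra conditions: passes with too few trades or too
# large a drawdown return 0, steering the GA away from them.

def on_tester(profit, trades, max_drawdown_pct):
    """Return the custom optimization criterion for one tester pass."""
    if trades < 30:             # too few trades: statistically meaningless
        return 0.0
    if max_drawdown_pct > 40:   # unacceptable drawdown: reject the pass
        return 0.0
    # otherwise reward profit, scaled down by the drawdown
    return profit * (1.0 - max_drawdown_pct / 100.0)

print(on_tester(1000.0, 50, 20.0))  # 800.0 - viable pass
print(on_tester(1000.0, 10, 20.0))  # 0.0   - rejected: too few trades
```

The thresholds and the profit-times-drawdown weighting are just one possible shape for such a penalty; the point is that the returned value, not the raw profit, is what the GA maximizes.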
By the way, how do you implement this? I have also started thinking about doing it this way.
You mean "smart" - optimised and adapted to a certain environment?
Well, yes, selection is always performed by some criterion. By the way, the criterion itself may change with generations, if necessary.
It is the usual workflow with the tester: first you set all the parameters for optimization, then you run it until the tester begins to draw horizontal lines on each pass - a sign that the GA has converged around one local maximum. Then you check in the optimization tab which parameters the GA has stopped changing, and in the next optimization runs you split those parameters over different intervals. Often, though, it is enough simply to restart the tester after deleting (or saving elsewhere) the optimization caches - the GA is then initialized randomly.
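A rough sketch of the "split the stuck parameters" step described above, in Python. The stagnation window and the halving rule are my own illustrative choices, not the poster's exact procedure:

```python
# Sketch: detect GA stagnation (the "horizontal lines" symptom) and
# split a stuck parameter's range into sub-intervals for separate runs.

def is_stagnant(best_per_generation, window=5):
    """GA is considered stagnant if the best result stopped improving
    over the last `window` generations."""
    tail = best_per_generation[-window:]
    return len(tail) == window and max(tail) <= tail[0]

def split_range(lo, hi):
    """Split [lo, hi] into two sub-intervals to optimize separately."""
    mid = (lo + hi) / 2.0
    return (lo, mid), (mid, hi)

history = [10, 42, 57, 58, 58, 58, 58, 58]
print(is_stagnant(history))   # True: no improvement in the last 5 gens
print(split_range(0.0, 1.0))  # ((0.0, 0.5), (0.5, 1.0))
```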
I think the GA should converge to the global maximum (not a local one) if the step of the parameter changes is small enough. If the step is too large, changing the intervals won't help: maxima can slip between neighboring values.
Then it is simpler to do a full enumeration with the smallest step, and then run the GA, weeding out the worst ranges.
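Why a too-large step can defeat interval changes: a maximum narrower than the step simply falls between neighboring grid points. A toy Python illustration (the objective function and peak width are invented):

```python
# Sketch of how a narrow maximum can "slip between neighboring values"
# when the step is too coarse. The objective peaks sharply at x = 0.37;
# a step of 0.1 never samples close enough to see it.

def objective(x):
    # narrow peak of width ~0.02 centered at 0.37, on a flat background
    return 1.0 if abs(x - 0.37) < 0.01 else 0.0

def grid_max(step):
    """Best objective value found by enumerating [0, 1] with this step."""
    xs = [i * step for i in range(int(1.0 / step) + 1)]
    return max(objective(x) for x in xs)

print(grid_max(0.1))    # 0.0 - coarse grid misses the peak entirely
print(grid_max(0.005))  # 1.0 - fine grid finds it
```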
Not easier; here is my EA being optimised:
One pass of a single test of this EA over 1.5 years takes 1.5-2 seconds, and the optimizer runs at about the same speed. You can estimate how long a full search would take, while the GA found results that satisfy me within 20-50 minutes.
When the number of passes is displayed in scientific notation, the genetics' digit capacity overflows and it doesn't work correctly (if it works at all). I had to:
1. Reduce the number of steps. To keep the step from being too coarse while still covering the desired range, I made the step non-linear, changing the parameter over 0.001-0.099, 0.01-0.99, 0.1-9.9, ... i.e. a step with an accuracy of about 1%.
2. Reduce the number of variables to be optimised - this is the main thing.
2a. Split the variables into groups that are almost independent of each other, and optimise each group separately.
2b. Find variables that depend on another variable, and link them. I removed a couple of variables that way, after a very long fuss.
2c. Find variables that can be made constants at the cost of a tiny decrease in profitability. I found some of those too.
3. Narrow down the ranges of the variables after many evaluations.
Until I made such sacrifices, my optimization was not effective. All of this concerns optimisation with too many variables, which is wrong in itself, but some Expert Advisors evolve towards simplification rather than complication.
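Item 1's non-linear step can be sketched as a function decoding a single integer gene into a parameter value. The three-decade layout follows the ranges quoted above (0.001-0.099, 0.01-0.99, 0.1-9.9), while the exact encoding is my own guess at one possible implementation:

```python
# Sketch of a non-linear (roughly logarithmic) step: one integer gene
# index covers three decades with ~1% relative accuracy, using only
# 3 * 99 = 297 values instead of ~10,000 uniform steps.

def decode(index):
    """Map gene index 0..296 to a parameter value with ~1% accuracy."""
    decade, pos = divmod(index, 99)    # 3 decades of 99 steps each
    step = 0.001 * 10 ** decade        # 0.001, 0.01, 0.1
    return round((pos + 1) * step, 4)

print(decode(0))    # 0.001 - start of the first decade
print(decode(98))   # 0.099 - end of the first decade
print(decode(99))   # 0.01  - start of the second decade
print(decode(296))  # 9.9   - end of the third decade
```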
This is a matter of system complexity; I was discussing it in the GU thread. I take 3-5 steps per parameter for a complete enumeration. You have 18 parameters in the screenshot, which would give (3-5)^18 = 400M to 3.8 trillion passes - unfortunately a lot. And, most importantly, there are many pairs, so I don't do a full search over everything: I fix some parameters and enumerate in groups, which are then refined together by the GA, narrowing in on the maximum.
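The arithmetic behind those pass counts, checked in Python. The last line also shows that such counts exceed a 32-bit counter, which is consistent with the digit-overflow guess discussed in this thread (that causal link is the thread's hypothesis, not an established fact):

```python
# Pass-count arithmetic from the post above: 18 parameters with
# 3 to 5 values each gives 3**18 to 5**18 full-enumeration passes.

print(3 ** 18)   # 387420489       (~4e8, the "400M")
print(5 ** 18)   # 3814697265625   (~3.8e12, the "3.8 trillion")

# At ~2 seconds per pass even the lower bound is hopeless:
print(3 ** 18 * 2 / 3600 / 24 / 365)  # roughly 24.6 years, single-core

# And the upper bound doesn't even fit in a 32-bit counter:
print(5 ** 18 > 2 ** 32)  # True
```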
And I completely agree with the more detailed statement above.
What do you mean by "it doesn't work correctly at all"?
How can the incorrect behaviour be reproduced?
For me GA works unambiguously. In the EA I write a settings file for each successful pass, building the file name from an MD5 hash of the input optimization parameters themselves, i.e. during optimization I watch the files appear in the Common folder.
My only problem so far is that after some time the GA may start converging around a small group of optimization parameters - I think this is normal, all GAs work this way, and it is a problem of how they are used.
But the GA unambiguously works and does not hang - I can see it from the files being added and their unique names.
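The MD5-named pass files can be sketched like this (Python with hashlib instead of MQL5; the parameter names and the ".set" extension are invented for illustration):

```python
# Hypothetical sketch of the MD5-named pass files described above:
# one result file per parameter set, named by the MD5 of the inputs,
# so every unique pass gets a unique, deterministic file name.

import hashlib

def pass_filename(params):
    """Build a result-file name from the MD5 hash of the input parameters."""
    # canonical text form of the inputs, e.g. "Lots=0.1;Period=14"
    text = ";".join(f"{k}={v}" for k, v in sorted(params.items()))
    return hashlib.md5(text.encode()).hexdigest() + ".set"

a = pass_filename({"Period": 14, "Lots": 0.1})
b = pass_filename({"Period": 15, "Lots": 0.1})
print(a != b)  # True: different inputs give different file names
```

Sorting the parameters before hashing makes the name independent of the order in which the inputs are listed, so identical parameter sets always map to the same file.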
I wrote about this a long time ago, when I was using frames in the EA. I no longer remember the exact point; I think not all frames were arriving (the ones with better results). I'll look for my old posts and try to clarify.
But I distinctly remember it was clearly reproducible in my Expert Advisor: as soon as the number of passes exceeded a certain number and was displayed in scientific notation, my genetics broke down. What mattered was not only a large number of steps per variable, but also a large number of variables.