It's a pity there are no 'pioneers'. I'll keep digging...
I've just read a rather entertaining article.
What interests me is the principle by which the brain structures itself as a neural network: first a large number of connections of all kinds is recruited without much regard for their quality, and then selection kicks in on the principle "remove everything that is not needed to achieve the result".
It is quite possible that this approach helps to fight both overfitting and, so to speak, "structural insufficiency".
Which raises a question: has anyone come across studies of a similar principle in neural network theory?
I've seen something on "network thinning", where connections whose coefficients have a small absolute value are removed.
But I haven't seen it done at the scale of a tens-of-times surplus, with "big, very big surplus, then reduction" as the basic principle. Want to try making one?
I'm interested. The only thing is that the computational resources won't be enough. // I don't see how to adapt a GPU to this: it only pays off for computing schemes of the same type, while here the network topology is different every time.
MetaDriver:
Want to try making one?
I do. But for now I'm thinking it over, trying to work out which reduction algorithm to use. The first idea is indeed to thin by absolute value, and to make the threshold depend on the number of examples already presented: the larger the training sample, the harder it should be for a connection to survive.
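For illustration, here is a minimal sketch of that thinning rule in Python/NumPy. The exact schedule (threshold growing with the logarithm of examples seen) is only an assumption of mine; the idea above just says the threshold should grow with the training-sample size.

import numpy as np

def prune_by_magnitude(weights, examples_seen, base_threshold=1e-3):
    """Zero out connections whose absolute weight is below a threshold.

    The threshold grows with the number of training examples already
    presented, so the more data the network has seen, the harder it is
    for a weak connection to survive.  The log-based schedule here is
    only an illustrative choice.
    """
    threshold = base_threshold * np.log1p(examples_seen)
    mask = np.abs(weights) >= threshold          # surviving connections
    return weights * mask, mask

# Start with a deliberately oversized, densely connected layer ("big surplus"),
# then thin it out as more examples are presented.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=(64, 64))
for seen in (1_000, 10_000, 100_000):
    w, alive = prune_by_magnitude(w, seen)
    print(f"after {seen} examples: {alive.sum()} of {alive.size} connections remain")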
I don't see how to adapt a GPU to this: it only pays off for computing schemes of the same type, while here the network topology is different every time.
They are of the same type, and they are implemented in hardware. The efficiency of parallel computing is greatly exaggerated anyway (there are real calculations on this, and even doctoral theses defended on the subject); in the general case it is even slower than sequential computing, and the reason is the time spent on data transfer.
I can't agree with that straight away. If you understand the specifics well, you can contrive to squeeze hundreds of times more speed out of some classes of tasks. So for heavy computations it always makes sense to try to reduce the problem to one of those "some" classes. If it works, the gain can be huge. If not, then no: count sequentially on an ordinary processor.
--
For example, genetic optimization of same-type neural networks (with identical topology) on a large number of training examples is extremely profitable: execution is tens or hundreds of times faster. The only problem is that each topology requires a new OpenCL program. This can be solved by building basic OCL templates and generating a new OCL program automatically (programmatically) for a given topology.
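Roughly, the "basic OCL template + programmatic generation" idea could look like this (a sketch in Python; all names are made up, and evaluating one network per work-item on a shared input is just one possible layout, not the only one):

# Sketch: generate an OpenCL kernel for one fixed feed-forward topology.
# Every individual in the genetic population shares this topology, so the
# same kernel evaluates all of them in parallel (one work-item per network).

KERNEL_TEMPLATE = """
__kernel void evaluate(__global const float *weights,   // all individuals packed together
                       __global const float *inputs,    // one input vector shared by all
                       __global float *outputs)
{{
    int net = get_global_id(0);                 // one work-item = one network
    __global const float *w = weights + net * {weights_per_net};
{body}
}}
"""

def generate_kernel(layers):
    """layers = [n_in, n_hidden1, ..., n_out]; returns OpenCL source text."""
    lines, w_idx = [], 0
    prev = [f"inputs[{i}]" for i in range(layers[0])]
    for l, size in enumerate(layers[1:], start=1):
        cur = []
        for j in range(size):
            terms = " + ".join(f"w[{w_idx + i}] * {prev[i]}" for i in range(len(prev)))
            w_idx += len(prev)
            lines.append(f"    float n{l}_{j} = tanh({terms});")
            cur.append(f"n{l}_{j}")
        prev = cur
    for j, name in enumerate(prev):
        lines.append(f"    outputs[net * {layers[-1]} + {j}] = {name};")
    return KERNEL_TEMPLATE.format(weights_per_net=w_idx, body="\n".join(lines))

print(generate_kernel([3, 4, 1]))   # small topology just to inspect the generated source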
Here, by the way. While writing what above, an idea came to mind how to reduce your problem to a class advantageous for GPU calculations. To do it step by step: inside each step we have to read everything in one OCL program, but reduce to zero coefficients (in essence imitate). And for a new step, generate a new program in which the reductions of the previous step have already been "flashed" into the program. And so on. But this is the case if genetic "learning" is used.
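And the "flashing in" of the previous step's reductions might look like this: weights pruned to exact zero are simply not emitted into the next program, and the survivors are baked in as literal constants (again only a sketch with made-up names, not a finished implementation):

# Sketch of "flashing" the previous step's reductions into the next program:
# connections pruned to zero are not emitted at all, so the regenerated
# kernel does no work for them.
import numpy as np

def emit_neuron(w_row, in_names, out_name):
    """Emit one neuron's OpenCL line, skipping pruned (zero) weights."""
    terms = [f"({w:.6f}f * {x})" for w, x in zip(w_row, in_names) if w != 0.0]
    expr = " + ".join(terms) if terms else "0.0f"
    return f"    float {out_name} = tanh({expr});"

w = np.array([[0.8, 0.0, -0.3],      # second input pruned out at the previous step
              [0.0, 0.0,  0.5]])
for j, row in enumerate(w):
    print(emit_neuron(row, ["in0", "in1", "in2"], f"h{j}"))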
I can't agree with that straight away. If you understand the specifics well, you can contrive to squeeze hundreds of times more speed out of some classes of tasks. So for heavy computations it always makes sense to try to reduce the problem to one of those "some" classes. If it works, the gain can be huge. If not, then no: count sequentially on an ordinary processor.
As far as I remember, the critical value is the ratio of the execution time of the parallelized part to the non-parallelized part (data preparation, data transfer) in the main loop, and the higher the degree of parallelism, the stricter the requirements on this ratio. That is why one should aim not only at reducing the algorithm to a "parallel-friendly" form, but also at minimizing the non-parallel part.
For example, in the now-fashionable cloud computing (implemented, by the way, in the Five) the limitations on the gain are very serious precisely because of transfer time. In fairness, it should be noted that these limitations only show up once the cloud network is loaded to at least tens of percent of capacity.
Well, that's not the point now: there really isn't much to parallelize in this task anyway.
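For reference, the ratio being described is essentially Amdahl's law: if a fraction s of the loop stays sequential (data preparation, transfer), the speedup on N parallel units is bounded by 1 / (s + (1 - s) / N). A quick numeric check (the fractions are arbitrary examples):

def speedup(serial_fraction, n_workers):
    """Amdahl's law: upper bound on speedup when only part of the work parallelizes."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_workers)

# Even a modest 10 % spent on data preparation/transfer caps the gain near 10x,
# no matter how many GPU cores are thrown at the parallel part.
for s in (0.01, 0.10, 0.50):
    print(f"serial {s:>4.0%}: x{speedup(s, 1_000):.1f} with 1000 workers")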
There will be no parallelism: the templates depend on each other sequentially, so they would all have to be generated in advance, but then there would be trillions of them and most of the time would be spent finding the right one at any given moment.)
Ah, well then, I'll repeat it:
By the way, while writing the above, an idea came to mind of how to reduce your problem to a class that is advantageous for GPU computation. Do it step by step: within each step everything is computed by a single OCL program, but the pruned coefficients are forced to zero (essentially imitating the reduction). For the next step, a new program is generated in which the reductions of the previous step have already been "flashed" into the program. And so on. But this works if genetic "learning" is used.