"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 79

 

About sparse matrices, correctly noted: if the matrix is sparse, the dense model is not very efficient, but here you can use an array of indices. It will run a little slower than the direct (dense) implementation, but still at a decent level, and for the GPU it makes no difference whether the network is sparse or fully connected; it does the additions all the same.
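To make the index-array idea concrete, here is a minimal MQL5-style sketch (the names SparseNeuron and ForwardSparse, the feed-forward ordering and the tanh activation are my own illustrative assumptions, not anything from the project): each neuron keeps only the indices of the neurons that feed it, so the inner loop walks the actual connections instead of a full matrix row.

```
// Illustrative sketch: sparse forward pass via per-neuron index arrays.
struct SparseNeuron
  {
   int    src[];   // indices of the source neurons feeding this one
   double w[];     // weights matching src[] one-to-one
  };

// value[0..inputs-1] are assumed to hold the network inputs; neuron n writes
// value[inputs+n]. Neurons are assumed to be stored in feed-forward order,
// so every source is already computed when it is read.
void ForwardSparse(const SparseNeuron &net[], double &value[], const int inputs)
  {
   for(int n=0; n<ArraySize(net); n++)
     {
      double sum=0.0;
      for(int k=0; k<ArraySize(net[n].src); k++)
         sum+=net[n].w[k]*value[net[n].src[k]];
      value[inputs+n]=MathTanh(sum);   // any activation would do here
     }
  }
```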

The Hopfield network is no problem at all; if it's not clear how to implement it, ask (because I don't understand how such a simple example can cause difficulties).
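For reference, a minimal sketch of a Hopfield update in the same MQL5 style (illustrative only; the flat n*n weight layout and the sign activation on +/-1 states are my assumptions): the previous state vector simply serves as the input, which is why it fits the same scheme.

```
// Illustrative Hopfield sketch: the previous state is the input, weights are
// symmetric with a zero diagonal, stored as a flat n*n array w[i*n+j].
void HopfieldStep(const double &w[], double &state[], const int n)
  {
   for(int i=0; i<n; i++)
     {
      double sum=0.0;
      for(int j=0; j<n; j++)
         if(j!=i)
            sum+=w[i*n+j]*state[j];
      state[i]=(sum>=0.0 ? 1.0 : -1.0);   // sign activation
     }
  }
// Repeating HopfieldStep() until state[] stops changing recalls the stored
// pattern closest to the initial state.
```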

I didn't miss anything, did I?

PPPS: I actually spent quite a long time thinking about this, and this implementation is who-knows-which in a row; I kept trying to think of something that would be impossible to implement with this model, and when I couldn't come up with anything, I decided to ask the audience for help.

 
Urain:

Changing the activation or some other process can be described by dynamic strings; MetaDriver has experience with this.

I confirm: this is the least of the problems outlined.

Regarding sparsity.

As I understand it, at the level of network description, it's fine and even quite convenient.

All that's needed is a compressor: a "packer" into a compact description, which, in turn, is "food" for subsequent compilation into source code in MQL and/or OpenCL.

// Well, or for a dynamic "universal mesh" configurator, if it still has supporters...
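A rough sketch of what such a "packer" might look like, under my own assumptions (the name PackMaskRow and the flat 0/1 mask layout are illustrative): one row of the full connection mask is collapsed into a compact index list, which a code generator for MQL5/OpenCL could then iterate over directly.

```
// Illustrative "packer": one row of a full 0/1 mask (flat n*n layout) is
// collapsed into a compact list of source indices for that neuron.
void PackMaskRow(const int &mask[], const int row, const int n, int &idx[])
  {
   int count=0;
   ArrayResize(idx, n);                 // upper bound, shrunk below
   for(int j=0; j<n; j++)
      if(mask[row*n+j]!=0)
         idx[count++]=j;                // keep only existing connections
   ArrayResize(idx, count);
  }
```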

 

Regarding sparsity (I wrote about this above): you can create an index array for each neuron based on the mask, which, after the computation, will indicate where to place the outputs. And in general, we can move on to the broader discussion. The main thing now is to agree on the reference model as a whole.

This model is easy to understand, which is why it will be easy to write learning algorithms for it.

For GPU, the original model is still better (imho).

 
TheXpert:

Then why did you attach the mask to the outputs again, and not to the neurons? :)

And how do you want to cram the activation function into the GPU?

Imho, like last time, you're trying to cram in what can't be crammed in. But that's just imho, so feel free to disregard it.

I won't butt in anymore, unless it's something substantive.

Ah, the cognitron. What else? The Hopfield network, where the input is the output. There's also sparse...

Sparse coding is great. I was messing around with it half a year ago, looking for patterns in the market. It works, and it seems to generalize. But as it turned out from my experiments, past price patterns are not repeated in the future. I applied the same method to searching for patterns in images and sounds, and it worked quite well. I gave a report to my colleagues, who have been trying to do the same thing for the last 3 years using spiking neurons. Everyone was fascinated. Two Ph.D. candidates asked me to become their advisor (and, of course, to let them use my results in their dissertations). One offered to write an article for a journal. It took me only a couple of weeks to write the code, run it on different data and write the report. In short, sparse coding has a lot of potential, but probably not in the market; rather, where the data has some structure.

By the way, about the universal NS model described above I have little to say yet, because I don't understand how neurons are split into layers, how they are assigned the function of processing inputs, and how connections are established.

 
gpwr:

Sparse coding is great. I was messing around with it half a year ago, looking for patterns in the market. It works, and it seems to generalize. But as it turned out from my experiments, past price patterns are not repeated in the future. I applied the same method to searching for patterns in images and sounds, and it worked quite well. I gave a report to my colleagues, who have been trying to do the same thing for the last 3 years using spiking neurons. Everyone was fascinated. Two Ph.D. candidates asked me to become their advisor (and, of course, to let them use my results in their dissertations). One offered to write an article for a journal. It took me only a couple of weeks to write the code, run it on different data and write the report. In short, sparse coding has a lot of potential, but probably not in the market; rather, where the data has some structure.

By the way, about the universal NS model described above I have little to say yet, because I don't understand how neurons are split into layers, how they are assigned the function of processing inputs, and how connections are established.

In the proposed model, neurons are not limited to layers at all, i.e. any earlier neuron can, in principle, feed its output to any later one.

But it is possible to introduce a restriction(!): define layers of the network and check the mask against these layer rules (it will not affect the algorithm, but there will be an additional check at loading).

Then, by the way, the GPU can be fed not individual neurons but packs of neurons described as a layer. But again, the model itself is not limited by layers; layering is an additional restricting rule (like a stop on demand), which may or may not be applied.

PS: With layerless construction, the upper triangle of the mask (beyond the inputs) is zeroed, which expresses the absence of feedback connections in the main matrix; when layers appear, additional zeroing is added below the diagonal. That, in fact, is what the mask check amounts to.
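A sketch of that mask check, under my own conventions (the function name CheckMask, the flat layout mask[i*n+j] meaning neuron j feeds neuron i, and the layer_of[] array are illustrative assumptions): feed-forward means the upper triangle must be empty, and the optional layer rule additionally requires every source to sit in an earlier layer than its target.

```
// Illustrative mask check: mask[i*n+j]!=0 means neuron j feeds neuron i.
// Feed-forward: only j<i is allowed (empty upper triangle). With layers,
// the source must additionally come from an earlier layer than its target.
bool CheckMask(const int &mask[], const int &layer_of[], const int n,
               const bool layered)
  {
   for(int i=0; i<n; i++)
      for(int j=0; j<n; j++)
        {
         if(mask[i*n+j]==0)
            continue;
         if(j>=i)
            return(false);                               // feedback forbidden
         if(layered && layer_of[j]>=layer_of[i])
            return(false);                               // layer rule violated
        }
   return(true);
  }
```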

 
Urain:
In the proposed model, neurons are not limited to layers at all, i.e. any earlier neuron can, in principle, feed its output to any later one.

But it is possible to introduce a restriction(!): define layers of the network and check the mask against these layer rules (it will not affect the algorithm, but there will be an additional check at loading).

Then, by the way, the GPU can be fed not individual neurons but packs of neurons described as a layer. But again, the model itself is not limited by layers; layering is an additional restricting rule (like a stop on demand), which may or may not be applied.

With layerless construction, the upper triangle of the mask (beyond the inputs) is zeroed, which expresses the absence of feedback connections in the main matrix; when layers appear, additional zeroing is added below the diagonal. That, in fact, is what the mask check amounts to.

Layers are needed because in some networks different layers process their inputs differently and connect neurons to each other differently inside the layer. Actually, I don't see the practicality of the goal of building a universal network. There are many networks, each with its own nuances (the functions of neurons in different layers, how they are connected, how the weights are trained, etc.). Describing them all in one model seems to me impossible or inefficient. Why not create a library of different networks?
 
gpwr:
Why not create a library of different networks?
Exactly. And a GPU for a specific network would be even faster.
 
TheXpert:
Exactly. And a GPU for a specific network would be even faster.
But this approach doesn't even contain the seed of an attempt at genetic crossover of topologies.
 
MetaDriver:
But this approach doesn't even contain the seed of an attempt at genetic crossover of topologies.

Now you're wrong. First of all, if a network's outputs cannot be fed to the inputs of another network, what kind of network is it?

Second, practically all topologies can be covered by implementing layers with the functionality of the different networks.

By connecting layers with weights (a mask plus the weights themselves) you get any mix of topologies you want. The conversion is done by the neuron; the synapse only transmits the signal (a sketch of this is at the end of the post).

The main thing is that this mix makes sense.

For genetics, the only necessary and sufficient condition is the ability to obtain the outputs and the cloud of tunable parameters. I thought all of this through back then.
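As an illustration of the layer-mixing idea above, a rough MQL5-style sketch under my own assumptions (the class and function names are invented for the example): each layer type only converts its own input vector, while a connection, i.e. a mask plus weights, moves signals between layers without changing them.

```
// Illustrative sketch: the neuron (layer) does the conversion, the synapse
// (connection) only transmits a weighted, masked signal.
class CLayerBase
  {
public:
   virtual void      Process(const double &in[], double &out[])=0;
  };

class CTanhLayer : public CLayerBase
  {
public:
   virtual void      Process(const double &in[], double &out[])
     {
      ArrayResize(out, ArraySize(in));
      for(int i=0; i<ArraySize(in); i++)
         out[i]=MathTanh(in[i]);
     }
  };

// Connection between two layers: flat mask/weight matrices of size n_dst*n_src.
void Connect(const double &src[], const int &mask[], const double &w[],
             double &dst[], const int n_dst)
  {
   int n_src=ArraySize(src);
   ArrayResize(dst, n_dst);
   for(int i=0; i<n_dst; i++)
     {
      double sum=0.0;
      for(int j=0; j<n_src; j++)
         if(mask[i*n_src+j]!=0)
            sum+=w[i*n_src+j]*src[j];
      dst[i]=sum;
     }
  }
```

Chaining Process() and Connect() calls in any order then gives an arbitrary mix of layer types, which is the "any mix of topologies" point above.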

 
Urain:

Regarding sparsity (I wrote about this above): you can create an index array for each neuron based on the mask, which, after the computation, will indicate where to place the outputs.

It seems more logical for each neuron to specify where it takes its inputs from, and let whoever wants to come and take the outputs.
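For contrast, a small sketch of the two indexing conventions (names are illustrative, and the GPU remark is my own reasoning rather than something stated in the thread): taking inputs "from where" keeps all writes local to one neuron, which maps cleanly onto one GPU work-item per neuron, while placing outputs "to where" has several neurons adding into the same cells.

```
// Gather: one neuron reads from many sources, writes one result.
double GatherNeuron(const double &value[], const int &src[], const double &w[])
  {
   double sum=0.0;
   for(int k=0; k<ArraySize(src); k++)
      sum+=w[k]*value[src[k]];
   return(sum);
  }

// Scatter: one neuron writes its output into many destination accumulators.
void ScatterNeuron(const double out, const int &dst[], const double &w[],
                   double &acc[])
  {
   for(int k=0; k<ArraySize(dst); k++)
      acc[dst[k]]+=w[k]*out;
  }
```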