"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 34

 
joo:
I will try to get in touch with him; he disappears for long stretches. Of course, it would be great to have a dedicated article on OpenCL with MQL5. Especially now.

How's it going? JavaDev hasn't been on Skype in over a month.

This is a piece of code from him demonstrating how OpenCL works:

[05.06.2011 17:29:59] JavaDev:

   __kernel void MFractal(float x0,
                          float y0,
                          float x1,
                          float y1,
                          uint  max,
                 __global uint *out)
   {
      size_t  w = get_global_size(0);
      size_t  h = get_global_size(1);
      size_t gx = get_global_id(0);
      size_t gy = get_global_id(1);

      float dx = x0 + gx * (x1 - x0) / (float)w;
      float dy = y0 + gy * (y1 - y0) / (float)h;

      float x  = 0;
      float y  = 0;
      float xx = 0;
      float yy = 0;
      float xy = 0;

      uint i = 0;
      while ((xx + yy) < 4 && i < max) {
         xx = x*x;
         yy = y*y;
         xy = x*y;
         y = xy + xy + dy;
         x = xx - yy + dx;
         i++;
      }

      if (i == max) {
         out[w*gy + gx] = 0;
      } else {
         out[w*gy + gx] = (uint)((float)0xFFFFFF / (float)max) * i;
      }
   }

 

   int calcOCL() {
      uint startTime = GetTickCount();
      CL_STATUS status;
      cl_mem data_buf;
      data_buf = ctx.CreateBuffer(CL_MEM_ALLOC_HOST_PTR,CL_MEM_READ_WRITE,m_SizeX*m_SizeY,FLOAT,status);
      if (status!=CL_SUCCESS) {
         Alert("CreateBuffer: ", EnumToString(status));
         return (-1);
      }
      float x0 = -2;
      float y0 = -0.5;
      float x1 = -1;
      float y1 =  0.5;
      uint  max = iterations;
      
      
      kernel.SetArg(0,x0);
      kernel.SetArg(1,y0);
      kernel.SetArg(2,x1);
      kernel.SetArg(3,y1);
      kernel.SetArg(4,max);
      kernel.SetArg(5,data_buf);
      
      uint offset[2] = {0,0};
      uint work[2];   work[0] = m_SizeX;  work[1] = m_SizeY;
      uint group[2];  group[0] = wgs;     group[1] = 1;
      
      status = queue.NDRange(kernel, 2, offset, work, group);
      oclFlush(queue);
      
      for (int y=0;y<m_SizeY;y++) {
         status = queue.ReadBuffer(data_buf,true,y*m_SizeX,m_SizeX,Line[y].Pixel);
         if (status!=CL_SUCCESS) {
            Alert("ReadBuffer: ", EnumToString(status));
            break;
         }
      }
      oclFinish(queue);
      
      data_buf.Release();
      queue.Release();
      uint endTime = GetTickCount();
      return (int)(endTime-startTime);
   }
   
   uint calcMQL() {
      uint startTime = GetTickCount();
      float x0 = -2;
      float y0 = -0.5;
      float x1 = -1;
      float y1 =  0.5;
      uint  max = iterations;
      uint  w = m_SizeX;
      uint  h = m_SizeY;
      
      for (uint gy =0;gy<h;gy++) {
         for (uint gx =0;gx<w;gx++) {
            float dx = x0 + gx * (x1-x0) / w;
            float dy = y0 + gy * (y1-y0) / h;

            float x  = 0;
            float y  = 0;
            float xx = 0;
            float yy = 0;
            float xy = 0;
            uint i = 0;
            while ((xx+yy)<4 && i<max) {
               xx = x*x;
               yy = y*y;
               xy = x*y;
               y = xy+xy+dy;
               x = xx-yy+dx;
               i++;
            }

            if (i == max) {
               Line[gy].Pixel[gx] = 0;
            } else {
               Line[gy].Pixel[gx] = (int)(((float)i/max)*0xFFFFFF);
            }
         }
      }
      uint endTime = GetTickCount();
      return (int)(endTime-startTime);
   }
};
 
Graff:

How's it going? JavaDev hasn't been on Skype in over a month.

I got in touch with him, JavaDev is keeping an eye on the topic.
 
joo:
I got in touch with him, JavaDev is keeping an eye on the topic.
Hooray, there is hope that the project will not die :)
 

Lecture 1 is here: https://www.mql5.com/ru/forum/4956/page23

Lecture 2. Biological methods of information processing

I will set aside the principle of spike discharges in networks for now and briefly review the essence of biological information processing; then I will tie everything together. As an example, consider information processing in the visual cortex. This topic is far from trading, but it is a good source of useful ideas. Incidentally, many networks, such as Kohonen maps, and many weight self-learning methods were introduced in attempts to model the visual cortex.

Visual information is converted into electrical signals by the photoreceptor cells of the retina, filtered by the retinal ganglion cells (RGC), and then sent to the visual cortex via the LGN relay cells, whose purpose is still poorly understood. The retinal ganglion cells act as band-pass spatial filters that highlight the contours of objects; the principle of their work is very similar to the edge-detection filter in Photoshop. It is quite interesting that we perceive the world through the boundaries of objects.

In the visual cortex, the filtered image passes through several neural layers with abstruse names and acronyms. There are two channels of visual information processing: a "what" channel, which performs object recognition, and a parallel "where" channel for localizing objects and perceiving their motion. We are interested in the first channel, consisting of the two-dimensional layers V1, V2, V4, and IT, which are organized parallel to the retina (functionally, not spatially). The structure of these layers is quite complex. Electrical signals are passed from the retinal ganglion cells to V1, from V1 to V2, and so on. The cells of each layer take their inputs from the cells of the previous layer (feed-forward propagation) and from their neighbors (intralayer lateral connections). There are also feedback connections from higher layers, which are often neglected because they are poorly understood. The transformation of information in the visual cortex can be represented graphically in the following simplified form:

Simple cells S1 lie in V1. They are filters of elementary fragments of contours (the borders of objects), that is, short line segments with different slopes, lengths, polarities (a light line on a dark background or a dark line on a light background), and locations in two-dimensional space. Each S1 cell essentially "looks at" a certain section of the image through a narrow "slit" of a certain slope and length, and responds only when the contour in that section coincides in slope, length, and polarity with the "slit".

Complex cells C1 also lie in the V1 layer. Like the S1 simple cells, they respond to short contour segments of a particular slope and length in a particular part of the image, but they are less sensitive to shifts of these segments parallel to themselves (shift invariance).

Simple cells S2 lie in layers V2 and V4. They are spatial filters of more complex shapes consisting of two straight segments of different slope and length (e.g. G, T, L, V, X). They respond to these shapes at different locations in the image.

Complex cells C2 also lie in layers V2 and V4. They, too, are spatial filters of more complex shapes consisting of two straight segments of different slope and length, but they are less sensitive to parallel shifts of these shapes.

View-tuned cells (or simple cells S3) lie in the IT layer. They respond to even more complex shapes (objects) of different orientations and sizes.

Object-selective cells (or complex cells C3) also lie in the IT layer. They, too, respond to objects of different orientations and sizes, but independently of their location.

This multilayer transformation of visual information allows our brain to recognize an object regardless of its location in the image, its orientation and size. Object recognition (classification) takes place in the next layer of the visual cortex, called the prefrontal cortex (PFC).

 
What shall we call the project?
 
TheXpert:
What should we call them?
If the question is for me, in the literature, the networks I described are called hierarchical neural networks.
 
TheXpert:
And you think about the logo :)
The idea is for it to resonate with (be friendly to) the MetaQuotes logo.
 
TheXpert:
What should we call it?

Meta Universal Neural Network (MUNN)

Or does it not claim to be universal?

 
Urain:
Claim to be universal?
It does, but not to that extent. Perception is perception: any network can be thought of as a black box that receives and transforms information from the organs of perception (the inputs).
 
Artificial Brain