[Archive!] Pure mathematics, physics, chemistry, etc.: brain-training problems not related to trade in any way - page 561

 

I'm asleep, I'm asleep.

 
There seems to be a standard procedure for constructing an orthonormal set. Lagrange or something. All right, if you have solved the problem and proved it, then what is there to talk about...
 
Mathemat:
There seems to be a standard procedure for constructing an orthonormal set. Lagrange or something. All right, if you have solved the problem and proved it, then what is there to talk about...

No, no, no!!! If you have one, go ahead and post it! Very interesting: I looked for it and couldn't find it.

I'm interested in different approaches, since the choice of method may affect the speed of solving the target problem.

 
tara:

Makes sense. What are we going to make money on?

Forex!
 
MetaDriver: No, no, no!!! If you have one, go ahead and post it! Very interesting: I looked for it and couldn't find it.

Well, I haven't really delved into it yet. The Gram-Schmidt process: it's taught in linear algebra. Or quadratic forms.

As far as I understand, that's enough to start with, and not just the first step. There's a proof there too, and geometric interpretations.

I suspect there should be something native for this process among the OpenCL functions.
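For reference, the textbook Gram-Schmidt step the posts above refer to (standard material, written out here for convenience): given linearly independent vectors $v_1, \dots, v_k$, set

$$u_i = v_i - \sum_{j=1}^{i-1} \langle v_i, e_j \rangle \, e_j, \qquad e_i = \frac{u_i}{\|u_i\|},$$

and $e_1, \dots, e_k$ form an orthonormal set. The code posted below follows the same subtract-and-normalise pattern.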

 

The method of sequential orthogonalisation can be seen in the piece of code below. Gradient is a random vector from which the projections onto the basis vectors are subtracted. The basis is stored in a one-dimensional array Sarray. All arrays are declared as global. The process should be clear from the comments.

//+------------------------------------------------------------------+
//|  Sarray holds an orthonormal basis of k vectors,                 |
//|  each of dimension NeuroCellDim.                                 |
//|  The component of the vector Gradient orthogonal to the basis    |
//|  is computed; on success it is normalised and appended to the    |
//|  basis if the write flag allows it: write > 0                    |
//+------------------------------------------------------------------+
int OrthogonalProjection(int k, int write)
{
   int   i, j, err = 0, index = 0;
   double Alpha;
   if (k > 0) // if k > 0, the basis is not empty
      for (i = 0; i < k; i++)
      {
         for (j = 0; j < NeuroCellDim; j++)
         {
            InputX[j] = Sarray[index]; // extract the i-th basis vector into InputX
            index++;
         }
         Alpha = ScalarProduct( Gradient, InputX); // dot product = cos(Gradient, InputX) for unit vectors
         if (MathAbs(Alpha) > 1.0 - delta)   // if |cos()| ~ 1 (near-parallel or near-antiparallel)
         {
            err = -1; // treat such a vector as linearly dependent on the basis vectors
            break;
         }
//       Gradient := Gradient - Alpha * InputX     with renormalisation
         AddVektors( Gradient, InputX, -Alpha); // now orthogonal to the first i+1 basis vectors
      }

   if (err >= 0 && write > 0) // if the projection exists and the write flag allows it
      for (j = 0; j < NeuroCellDim; j++)  // append the new unit vector to the basis
      {
         Sarray[index] = Gradient[j]; 
         index++;
      }      

   return(err);
}
//+--- OrthogonalProjection End -------------------------------------+
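ScalarProduct and AddVektors are used above but not shown in the post. Below is a minimal sketch of what they presumably do, based only on the comments: the renormalisation inside AddVektors is an assumption drawn from the "with renormalisation" comment, and FillWithRandomUnitVector in the call-pattern sketch is a hypothetical helper, not part of the original code.

// Presumed helper: dot product of two NeuroCellDim-vectors.
// For unit vectors this equals cos() of the angle between them.
double ScalarProduct(const double &a[], const double &b[])
{
   double s = 0.0;
   for (int j = 0; j < NeuroCellDim; j++)
      s += a[j] * b[j];
   return(s);
}

// Presumed helper: a := a + c*b, then a is renormalised to unit
// length (assumption based on the comment in the code above).
void AddVektors(double &a[], const double &b[], double c)
{
   double norm = 0.0;
   for (int j = 0; j < NeuroCellDim; j++)
   {
      a[j] += c * b[j];
      norm += a[j] * a[j];
   }
   norm = MathSqrt(norm);
   if (norm > 0.0)
      for (int j = 0; j < NeuroCellDim; j++)
         a[j] /= norm;
}

// Possible call pattern: grow the basis one vector at a time,
// retrying whenever the random vector is (nearly) dependent.
void BuildBasis()
{
   int k = 0;
   while (k < NeuroCellDim)
   {
      FillWithRandomUnitVector(Gradient);  // hypothetical helper
      if (OrthogonalProjection(k, 1) >= 0) // success: new ort appended
         k++;
   }
}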
 

People, give me a hint. I'm lost. Here's the problem: there is a data sample that is very well approximated by a linear regression (the independent variable is the sample index).

The graph shows the linear regression equation. I want to transform the sample data so that the series becomes invariant with respect to the sample index. I tried to pick the free term of the equation by arithmetic operations and map the sample data onto that value, but at the beginning a peak appeared: values around 0.7, 0.46 and so on, descending to the asymptote at the required level. Where does this peak at the beginning come from? Can it be removed by changing the formula?

The Excel file is attached just in case.

 
alexeymosc:

People, give me a hint. I'm lost. Here's the problem: there is a data sample that is very well approximated by a linear regression (the independent variable is the sample index).

The graph shows the linear regression equation. I want to transform the sample data so that the series becomes invariant with respect to the sample index. I tried to pick the free term of the equation by arithmetic operations and map the sample data onto that value, but at the beginning a peak appeared: values around 0.7, 0.46 and so on, descending to the asymptote at the required level. Where does this peak at the beginning come from? Can it be removed by changing the formula?

The Excel file is attached just in case.

OK, since no other solutions came up, I ran a search for the value of the free term of the equation, minimising R^2 for the new series (the one derived from the original that should be invariant), that is, I made it almost linear. Apparently the problem is that the original data is not perfectly linear...
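A hypothetical reconstruction of the search just described, with two assumptions labelled up front: that "minimising R^2" means making the transformed series trendless (smallest squared correlation with the index), and that the transform divides the data by the shifted fit line. FindFreeTerm is my name, not from the original post.

// Scan candidate free terms c and keep the one for which the
// transformed series Z[i] = Y[i] / (a*X[i] + c) has the weakest
// linear trend against the index (smallest squared correlation).
double FindFreeTerm(const double &X[], const double &Y[], int n, double a,
                    double cMin, double cMax, double step)
{
   double bestC = cMin, bestR2 = DBL_MAX;
   for (double c = cMin; c <= cMax; c += step)
   {
      double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
      for (int i = 0; i < n; i++)
      {
         double z = Y[i] / (a*X[i] + c); // transformed series
         sx += i; sy += z; sxx += (double)i*i; syy += z*z; sxy += i*z;
      }
      double num = n*sxy - sx*sy;
      double den = (n*sxx - sx*sx) * (n*syy - sy*sy);
      double r2  = (den > 0.0) ? num*num/den : 0.0; // squared correlation
      if (r2 < bestR2) { bestR2 = r2; bestC = c; }
   }
   return(bestC);
}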
 
alexeymosc:

People, give me a hint. I'm lost. Here's the problem: there is a data sample that is very well approximated by a linear regression (the independent variable is the sample index).

The graph shows the linear regression equation. I want to transform the sample data so that the series becomes invariant with respect to the sample index. I tried to pick the free term of the equation by arithmetic operations and map the sample data onto that value, but at the beginning a peak appeared: values around 0.7, 0.46 and so on, descending to the asymptote at the required level. Where does this peak at the beginning come from? Can it be removed by changing the formula?

The Excel file is attached just in case.

Well, it means only one thing: the relative error of the approximation gets bigger as X (and Y) get smaller. What do you expect when you divide one small number by another small number? Try the change of variable X' = X + 100 and plot the new series over the range from 100 to 400 instead of 0 to 300: the graph will be much straighter, but it won't change the essence of the matter.
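A one-formula way to see this, assuming the transform divides the data by the fitted line (one plausible reading of the "arithmetic operations" above): if the data follow $Y_i = a X_i + b + \varepsilon_i$, the transformed series is

$$Z_i = \frac{Y_i}{a X_i + b} = 1 + \frac{\varepsilon_i}{a X_i + b},$$

so the deviation of $Z_i$ from its asymptote is the noise divided by the fitted value. Near the start of the series the denominator is smallest, which is where the same $\varepsilon_i$ produces the largest spikes; the shift $X' = X + 100$ makes the denominator large everywhere, which is why the graph straightens.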
 

Mislaid 10.03.2012 05:46:

The method of sequential orthogonalisation can be seen in the piece of code below. Gradient is a random vector...

Mathemat:

1. Well, I haven't really delved into it yet. The Gram-Schmidt process: it's taught in linear algebra. Or quadratic forms.

As far as I understand, that's enough to start with, and not just the first step. There's a proof there too, and geometric interpretations.

2. I suspect there should be something native for this process among the OpenCL functions.

1. To Mislaid, Mathemat:

It's the same thing in both places: the very same process I engineered myself yesterday. Successive subtraction of the vector's projections onto the previous orthonormal vectors.

It's days like this that make me feel like a classic.... :-))

--

By the way, I already wrote and debugged my test script last night. I also found a bug in the optimizer and sent it to the service desk. I worked around the bug by slightly changing the code. So everything works: reliable and fast, just the way I needed it.

2. There really is one in OpenCL, but only for the three-dimensional case [cross(a, b) builds a vector orthogonal to two given ones]. I need it for arbitrary dimension.
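For completeness, the three-dimensional special case mentioned above, as a minimal sketch (CrossProduct is my name for it; in OpenCL itself this is the built-in cross()):

// 3-D only: c = a x b is orthogonal to both a and b.
// For arbitrary dimension there is no such one-step product;
// one falls back to the orthogonalisation loop posted above.
void CrossProduct(const double &a[], const double &b[], double &c[])
{
   c[0] = a[1]*b[2] - a[2]*b[1];
   c[1] = a[2]*b[0] - a[0]*b[2];
   c[2] = a[0]*b[1] - a[1]*b[0];
}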