[Archive!] Pure mathematics, physics, chemistry, etc.: brain-training problems not related to trade in any way - page 561
I'm asleep, I'm asleep.
There seems to be a standard procedure for constructing an orthonormal set. Either Lagrange or something else. All right, if you have solved the problem and proved it, then what is there to talk about...
No no no!!! If you've got one, go ahead and post it! Very interesting, I haven't found one.
I'm interested in different approaches, since the choice may affect how fast the target problem gets solved.
Makes sense. What are we going to make money on?
Well, I haven't really delved into it yet. The Gram-Schmidt process, it's covered in linear algebra. Or quadratic forms.
As far as I understand it, that's enough to start with, and not just the first step. There's a proof there too, and geometric interpretations.
I have this suspicion that there should be something native for this process in OpenCL functions.
The method of serial orthogonalisation can be seen in the piece of code below. Gradient is a random vector from which the projections onto the basis vectors are removed. The basis is stored in a one-dimensional array Sarray. All arrays are declared as global. The process should, I think, be clear from the comments.
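The code itself did not survive the archive, but the serial (Gram-Schmidt-style) orthogonalisation described above can be sketched in plain C. The names `orthogonalize`, `grad`, and `sarray` are illustrative stand-ins for the poster's global arrays, and the basis vectors are assumed non-zero:

```c
#include <assert.h>
#include <math.h>

/* Remove from `grad` its projections onto the basis vectors stored
   row-by-row in the flat array `sarray` (nbasis rows of length dim),
   leaving the component of `grad` orthogonal to all of them. */
void orthogonalize(double *grad, const double *sarray, int nbasis, int dim) {
    for (int k = 0; k < nbasis; ++k) {
        const double *b = sarray + k * dim;   /* k-th basis vector */
        double dot = 0.0, norm2 = 0.0;
        for (int i = 0; i < dim; ++i) {
            dot   += grad[i] * b[i];
            norm2 += b[i] * b[i];
        }
        double c = dot / norm2;               /* projection coefficient */
        for (int i = 0; i < dim; ++i)
            grad[i] -= c * b[i];              /* subtract the projection */
    }
}
```

If the basis is already orthonormal, `norm2` is 1 and the division can be dropped, which is presumably what an OpenCL kernel version would exploit.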
People, give me a hint. I'm lost. Here's the problem: there is a sample of data that is very well approximated by a linear regression (the independent variable is the sample number).
The graph shows the linear regression equation. I want to transform the sample data so that it is invariant with respect to the sample number. I tried to bring the data to the equation's free term (the intercept) by arithmetic operations. But at the beginning there appeared a peak, at levels 0.7, 0.46 and so on, decaying to an asymptote at the needed level. Where did this initial peak come from? Can it be removed by changing the formula?
Excel is attached just in case.
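For what it's worth, one standard way to make such a sample invariant with respect to the sample number is to fit the regression and subtract the trend term, so the series fluctuates around the intercept instead of converging to it. A minimal sketch, assuming the independent variable is just the index 0..n-1 (not necessarily the poster's actual formula):

```c
#include <assert.h>
#include <math.h>
#include <stddef.h>

/* Fit y = a*x + b over x = 0..n-1 by ordinary least squares,
   then subtract a*x in place, leaving the series at the intercept b
   plus whatever noise was in the original data. */
void detrend_linear(double *y, size_t n) {
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (size_t i = 0; i < n; ++i) {
        double x = (double)i;
        sx  += x;
        sy  += y[i];
        sxx += x * x;
        sxy += x * y[i];
    }
    /* least-squares slope */
    double a = ((double)n * sxy - sx * sy) / ((double)n * sxx - sx * sx);
    for (size_t i = 0; i < n; ++i)
        y[i] -= a * (double)i;   /* remove the index-dependent part */
}
```

Subtracting the trend avoids the kind of initial spike that dividing by a fitted value can produce when the fitted value is small near the start of the series.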
Mislaid 10.03.2012 05:46: "Method of consistent orthogonalisation can be seen in the piece of code below. Gradient is a random vector..."
1. Well, I haven't really delved into it yet. The Gram-Schmidt process, they teach it in linear algebra. Or quadratic forms.
As far as I understand, that's enough to start with, and not just the first step. There's a proof there too, and geometric interpretations.
2. I have a hunch that there should be something native to OpenCL functions for this process.
1. To Mislaid, Mathemat,
Both here and there it's the same process everywhere, the same one I engineered myself yesterday: consecutive subtraction of the vector's projections onto the previously built orthogonal vectors.
It's days like this that make me feel like a classic.... :-))
--
By the way, I already made and debugged my test script last night. I also found a bug in the optimizer and reported it to the Service Desk. I worked around the bug by slightly changing the code. So everything works. Reliable and fast, just the way I needed it.
2. There really is one in OpenCL, but only for the three-dimensional case [cross(a, b) builds a vector orthogonal to two given vectors]. I need it for arbitrary dimension.
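OpenCL's built-in cross() is indeed 3-D only (it operates on float3/float4 components). The same operation in plain C, for reference; any vector it returns is orthogonal to both inputs, which is easy to check with dot products:

```c
#include <assert.h>

/* 3-D cross product, analogous to OpenCL's cross() built-in. */
void cross3(const double a[3], const double b[3], double out[3]) {
    out[0] = a[1] * b[2] - a[2] * b[1];
    out[1] = a[2] * b[0] - a[0] * b[2];
    out[2] = a[0] * b[1] - a[1] * b[0];
}

/* 3-D dot product, used here to verify orthogonality. */
double dot3(const double a[3], const double b[3]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}
```

There is no single cross product in arbitrary dimension (it takes n-1 vectors to pin down one orthogonal direction in n-space), which is why the projection-subtraction process discussed earlier in the thread is the usual general-dimension replacement.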