
 

It seems to work for the planar picture. I think it will work for the N-dimensional case as well. Any objections?

Looks like it's time to write a script and check it... :)

 

Well, a vector normal to the given unit vector can be constructed more simply: just replace any one of its coordinates xi by -sqrt(1-xi^2). This is essentially a 90-degree rotation in the plane of the vector and the i-th axis, away from that axis (i.e. we replace the cosine with minus the sine, getting the cosine of angle+pi/2). After that, all that remains is to normalize the result.

But it is not at all guaranteed that this way one can obtain a vector normal to all the others in the set. The same goes for any other rule that picks one normal vector out of all the possibilities...
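
For illustration, a minimal C++ sketch of the rotation being described here. Note that the post's literal recipe only changes the i-th coordinate and then normalizes; the sketch below performs the full 90-degree rotation in the plane of the vector and the i-th axis, so the remaining coordinates are rescaled as well. That rescaling factor and the helper name are assumptions of mine, not something stated in the post.

#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical helper: rotate the unit vector x by 90 degrees in the plane
// spanned by x and the i-th coordinate axis. The i-th coordinate cos(alpha)
// becomes -sin(alpha) = -sqrt(1 - xi^2); the remaining coordinates are rescaled
// by xi / sqrt(1 - xi^2) so the result stays a unit vector orthogonal to x
// (the rescaling is my assumption, it is not spelled out in the post).
std::vector<double> NormalViaAxisRotation(const std::vector<double>& x, std::size_t i)
{
    double c = x[i];                    // cos(alpha_i)
    double s = std::sqrt(1.0 - c * c);  // sin(alpha_i), assumed non-zero
    std::vector<double> y(x.size());
    for (std::size_t j = 0; j < x.size(); ++j)
        y[j] = (j == i) ? -s : x[j] * c / s;
    return y;                           // unit length up to rounding
}

int main()
{
    std::vector<double> x = {0.6, 0.8};                 // a unit vector
    std::vector<double> y = NormalViaAxisRotation(x, 0);
    printf("dot = %.3g\n", x[0] * y[0] + x[1] * y[1]);  // ~0
}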

 
alsu:

Well, a vector normal to the given unit vector can be constructed more simply: just replace any one of its coordinates xi by -sqrt(1-xi^2). This is essentially a 90-degree rotation in the plane of the vector and the i-th axis, away from that axis (i.e. we replace the cosine with minus the sine, getting the cosine of angle+pi/2). After that, all that remains is to normalize the result.

But it is not at all guaranteed that this way one can obtain a vector normal to all the others in the set. The same goes for any other rule that picks one normal vector out of all the possibilities...

Exactly. Not in the general case.

--

I seem to have found a quick solution. Once I sat down to write it and stopped straining over the algorithm, it simply popped up.

So, from the generated vector x1r, simply subtract its projection onto x0 (i.e. x0*sp(x0,x1r), where sp() is the scalar product).

In short, the formula fits in one line: x1 = x1r - x0*sp(x0,x1r);

:))
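
A minimal sketch of that one-liner in C++, assuming x0 is already a unit vector; sp() and the names x0, x1r are taken from the post, the helper name is mine:

#include <vector>

// Scalar product sp() as used in the thread.
double sp(const std::vector<double>& a, const std::vector<double>& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// One projection step: remove from the random vector x1r its component along
// the unit vector x0, leaving a vector orthogonal to x0.
std::vector<double> RemoveProjection(const std::vector<double>& x1r,
                                     const std::vector<double>& x0)
{
    double k = sp(x0, x1r);
    std::vector<double> x1(x1r.size());
    for (std::size_t i = 0; i < x1r.size(); ++i) x1[i] = x1r[i] - x0[i] * k;
    return x1;   // x1 = x1r - x0*sp(x0,x1r)
}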

 
In short, the formula fits in one line: x1 = x1r - x0*sp(x0,x1r);

:))

Yep. But you have to normalize it afterwards.

Well, it still fits in one line anyway: x1 = norm(x1r - x0*sp(x0,x1r))
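
For completeness, a sketch of a norm() helper matching that one-liner, assuming the argument is not the zero vector (the degenerate case is discussed further down the thread):

#include <cmath>
#include <vector>

// norm(): scale a vector back to unit length, as in x1 = norm(x1r - x0*sp(x0,x1r)).
std::vector<double> norm(std::vector<double> v)
{
    double len = 0.0;
    for (double c : v) len += c * c;
    len = std::sqrt(len);        // assumed non-zero, i.e. x1r was not collinear with x0
    for (double& c : v) c /= len;
    return v;
}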

 
MetaDriver:

It seems to work for the planar picture. I think it will work for the N-dimensional case as well. Any objections?

Looks like it's time to write a script and check it out... :)

You don't have to write it. Proof:

Let a = (a1,a2) and b = (b1,b2) be the two unit vectors, and take their sum and difference:

sX = (a1+b1, a2+b2); dX = (a1-b1, a2-b2)

Since a and b are unit vectors, the parallelogram they span is a rhombus, so its diagonals are perpendicular; hence the sum and the difference are orthogonal to each other. Exchanging their moduli is therefore equivalent to turning one of them 90 degrees to the right and the other 90 degrees to the left. In Cartesian coordinates that is just a swap of the first and second coordinates with a sign flip:

sXtr = (a2-b2, -(a1-b1)); dXtr = (-(a2+b2), a1+b1)

=>

sXtr - dXtr = (2*a2, -2*a1), a vector of modulus 2 that is obviously orthogonal to (a1,a2), i.e. to vector a. Q.E.D.
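
A small numeric check of this argument in C++ (the exact sign choices of the two rotations are my reconstruction of the proof above, not a quote from it):

#include <cmath>
#include <cstdio>

// Check: sXtr = dX rotated -90 degrees, dXtr = sX rotated +90 degrees, and
// sXtr - dXtr should be a vector of modulus 2 orthogonal to a, for any unit a, b.
int main()
{
    for (double ta = 0.1; ta < 6.2; ta += 0.7)
        for (double tb = 0.2; tb < 6.2; tb += 0.7)
        {
            double a1 = std::cos(ta), a2 = std::sin(ta);   // unit vector a
            double b1 = std::cos(tb), b2 = std::sin(tb);   // unit vector b
            double sX1 = a1 + b1, sX2 = a2 + b2;           // sum diagonal
            double dX1 = a1 - b1, dX2 = a2 - b2;           // difference diagonal
            double sXtr1 =  dX2, sXtr2 = -dX1;             // dX rotated -90 degrees
            double dXtr1 = -sX2, dXtr2 =  sX1;             // sX rotated +90 degrees
            double r1 = sXtr1 - dXtr1, r2 = sXtr2 - dXtr2;
            printf("dot=% .1e  len=%.3f\n",                // expect ~0 and 2.000
                   r1 * a1 + r2 * a2, std::sqrt(r1 * r1 + r2 * r2));
        }
}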

 
All that's left is to run it in a loop: for(i=0; i<InpVectorCount; i++) {....}
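
Put together, the loop might look like this C++ sketch; InpVectorCount corresponds to the size of the set, and the assumption is that the vectors already in the set are unit and mutually orthogonal:

#include <cmath>
#include <vector>

typedef std::vector<double> Vec;

// Scalar product, as sp() in the thread.
double sp(const Vec& a, const Vec& b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Strip off, one by one, the projections of the random vector x1r onto each
// vector of the (orthonormal) set, then normalize the leftover.
Vec OrthoToSet(Vec x1r, const std::vector<Vec>& set)
{
    for (int i = 0; i < (int)set.size(); i++)      // the for(i=0; i<InpVectorCount; i++) loop
    {
        double k = sp(set[i], x1r);
        for (std::size_t j = 0; j < x1r.size(); ++j) x1r[j] -= set[i][j] * k;
    }
    double len = std::sqrt(sp(x1r, x1r));           // normalize; assumes len > 0
    for (double& c : x1r) c /= len;
    return x1r;
}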
 
MetaDriver:
All that's left is to run it in a loop: for(i=0; i<InpVectorCount; i++) {....}
Nah, it still comes down to initially choosing the right "arbitrary" vector, the one that will eventually give orthogonality to the whole set. I.e. the arbitrary vector turns out not to be arbitrary at all. So we are back to the question of how to compute the required initial vector.
 
alsu:
Nah, it still comes down to initially choosing the right "arbitrary" vector, the one that will eventually give orthogonality to the whole set. I.e. the arbitrary vector turns out not to be arbitrary at all. So we are back to the question of how to compute the required initial vector.
Bullshit. Just check at every step. If at some step of the transformation the vector turns out to be co-directed with the next vector of the set, we generate a new initial random vector and repeat the procedure. Since this case is unlikely (in finite dimension), that is easier (cheaper) than suspecting linear dependence up front (computing the rank of the matrix).
 
MetaDriver:
Bullshit. Just check at every step. If at some step of the transformation the vector turns out to be co-directed with the next vector of the set, we generate a new initial random vector and repeat the procedure. Since this case is unlikely (in finite dimension), that is easier (cheaper) than suspecting linear dependence up front (computing the rank of the matrix).
It does not have to become co-directed; it can simply end up at an oblique (non-right) angle to some or all of the vectors.
 
MetaDriver:
Bullshit. Just check at every step. If at some step of the transformation the vector turns out to be co-directed with the next vector of the set, we generate a new initial random vector and repeat the procedure. Since this case is unlikely (in finite dimension), that is easier (cheaper) than suspecting linear dependence up front (computing the rank of the matrix).
And it is cheaper because the scalar product has to be computed anyway (by the algorithm). If it turns out to be equal to one, we reset to the start and repeat the process. I.e. at each step just check if(sp(a,b) != 1.0) {}
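
As a sketch, that check could be wrapped in a helper like the one below; comparing a floating-point scalar product to exactly 1.0 is fragile, so a small tolerance is used here, which is an addition of mine rather than part of the post:

#include <cmath>

// Two unit vectors are co-directed (or opposite) when their scalar product is +-1.
// If that is (numerically) the case, the caller should draw a new random vector
// and repeat the procedure. The tolerance value is an assumption.
bool NeedRegenerate(double sp_ab /* sp(a,b) for unit vectors a, b */)
{
    return std::fabs(std::fabs(sp_ab) - 1.0) < 1e-12;
}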