[Archive!] Pure mathematics, physics, chemistry, etc.: brain-training problems not related to trade in any way - page 537
... Where does the number 6 come from? Because there are six neighbours?
Isn't that about the six handshake rule? Each dot has six neighbouring dots - six closest acquaintances.
The problem in this formulation is standard for a neural network: it minimizes the mean-square error on the sample. In this case it is a three-input linear perceptron with a bias on the third input. This is essentially an iterative numerical solution method. How do you tie Gauss in here (or not)?
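A minimal sketch of that iterative picture, assuming the model y ≈ a*x1 + b*x2 + c with the bias c fed in as a constant third input; the data, learning rate and iteration count below are made up purely for illustration:

```python
import numpy as np

# Sketch: a linear "perceptron" y_hat = a*x1 + b*x2 + c trained by gradient
# descent on the mean-squared error. Synthetic data, illustrative only.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                    # two real inputs
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3          # "true" a, b, c
y += rng.normal(scale=0.05, size=200)            # a little noise

Xb = np.hstack([X, np.ones((len(X), 1))])        # bias as a constant third input
w = np.zeros(3)                                  # [a, b, c]
lr = 0.1
for _ in range(2000):
    err = Xb @ w - y                             # residuals on the sample
    grad = 2 * Xb.T @ err / len(y)               # gradient of the MSE
    w -= lr * grad                               # one iterative step

print("iterative estimate of a, b, c:", w)
```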
It is possible not to bother with a neural network in this case at all and to solve the problem by simple enumeration of the coefficients a, b, c that minimize the sample error.
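A sketch of that brute-force alternative: enumerate (a, b, c) on a coarse grid and keep whichever triple gives the smallest squared error. The grid bounds and step here are arbitrary assumptions, and the search only finds the answer to grid precision.

```python
import itertools
import numpy as np

def brute_force_fit(X, y, grid=np.arange(-2.0, 2.0, 0.1)):
    """Enumerate (a, b, c) on a coarse grid and keep the triple with the
    smallest sum of squared errors. Crude, but needs no linear algebra."""
    best, best_err = None, np.inf
    for a, b, c in itertools.product(grid, repeat=3):
        err = np.sum((a * X[:, 0] + b * X[:, 1] + c - y) ** 2)
        if err < best_err:
            best, best_err = (a, b, c), err
    return best, best_err
```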
Shame on me, I don't understand the logic behind your solution... Where does the number 6 come from? Because there are 6 neighbours?
yosuf: Firstly, Gauss is present here from the very beginning, since it was he who invented the least-squares method; and secondly, the normal equations obtained by Gauss's least-squares method are then solved by another method of Gauss, the one for solving such systems with matrices.
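For reference, the "two Gausses" fit in a few lines: least squares gives the normal equations XᵀX·w = Xᵀy, and that small system is then solved by elimination (np.linalg.solve does LU factorisation, i.e. Gaussian elimination, under the hood). A sketch with illustrative names:

```python
import numpy as np

def normal_equations_fit(X, y):
    """Least squares the 'double Gauss' way: build the normal equations
    X^T X w = X^T y, then solve that 3x3 system by elimination."""
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append the bias column
    A = Xb.T @ Xb                                # 3x3 normal matrix
    b = Xb.T @ y                                 # right-hand side
    return np.linalg.solve(A, b)                 # [a, b, c]
```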
What on earth do you need matrices for, Yusuf?! It is a system of three linear equations in three unknowns. You can solve it without any Gauss at all. What's the problem?
You don't need any approximations either; just solve it by Cramer's formulae if you are so keen to tinker with matrices. Remember the "ruler", it gets used there a lot...
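Since Cramer's formulae came up: for a 3x3 system A·w = b they are just ratios of determinants, w_i = det(A_i)/det(A), where A_i is A with its i-th column replaced by b. A throwaway sketch; for least squares one would apply it to the 3x3 normal matrix from the previous post:

```python
import numpy as np

def cramer_3x3(A, b):
    """Solve a 3x3 system A @ w = b by Cramer's rule."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("singular system, Cramer's rule does not apply")
    w = np.empty(3)
    for i in range(3):
        Ai = A.copy()
        Ai[:, i] = b                 # replace i-th column by the right-hand side
        w[i] = np.linalg.det(Ai) / d
    return w
```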
I probably can't grasp what your "direct method" is. Probably another mathematical revolution, only now in linear algebra.
P.S. I think I'm beginning to guess: it's (18) again.
(18) will soon shake the foundations of Gaussian least squares in linear regressions.
The main thing is that it will not shake the fundamentals of DNA.
It's about the six handshake rule, isn't it? Each point has six neighbouring points - six closest acquaintances.
No!
By six I mean not a node's closest circle, but the average shortest distance that connects two arbitrary nodes.
Two parameters. One is the number of closest acquaintances. The second is the closeness (distance) of an acquaintance: how many handshakes apart the two of you are.
If the closest acquaintances number six, then a honeycomb fits nicely, and the closeness of acquaintance is determined by the size of the grey cell.
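The two parameters can be kept apart numerically: node degree (the "closest acquaintances") and the average shortest path length (the "handshakes"). A small-world sketch using networkx; the graph size and rewiring probability are arbitrary assumptions chosen only to illustrate the distinction:

```python
import networkx as nx

# Every node starts with 6 nearest neighbours (the "closest acquaintances"),
# a few links are rewired at random, and the average shortest path length
# plays the role of the "handshake" distance between two arbitrary nodes.
G = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, seed=1)

degrees = [d for _, d in G.degree()]
print("average degree (acquaintances):", sum(degrees) / len(degrees))
print("average shortest path (handshakes):",
      nx.average_shortest_path_length(G))
```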
What the hell matrices, Yusuf?! A system of three linear equations with three unknowns. You can solve it without any Gaussian. What's the problem?
Alexei, as I understand it, this is a system of infinitely many (or at least more than three) linear equations in three unknowns. In this formulation it is incorrect to pick out just three equations and look for an exact solution. We have to find coefficients that satisfy the entire X and Y vectors with minimum error, and there are dedicated methods for finding that optimal solution.
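A minimal sketch of that point, on synthetic data: with far more equations than unknowns, np.linalg.lstsq returns the coefficients that minimize the total squared error over the whole X and Y vectors rather than solving any three picked-out equations exactly.

```python
import numpy as np

# Overdetermined system: one equation per observation, only three unknowns
# a, b, c. lstsq picks the coefficients minimizing the sum of squared
# residuals over the entire sample. Data here is made up for illustration.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + 0.3 + rng.normal(scale=0.1, size=500)

Xb = np.hstack([X, np.ones((len(X), 1))])        # bias column
coeffs, residuals, rank, _ = np.linalg.lstsq(Xb, y, rcond=None)
print("least-squares a, b, c:", coeffs)
```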