Ambitious ideas !!!

 

Andrei01:

You are mistaken because you do not know the simplest things.

The data of any array is stored in memory as a linear sequence, from the first element to the last. To address element x[15], the compiler takes the address of the start of the array and adds an offset of 15 to get the element's address. For a two-dimensional array, say x[2][5], it first has to compute the offset of row 2 and then add to it the offset of the 5th element, i.e. twice as many operations.

x[2][5] --> x[ 2*ArrayRange(x,1) + 5 ]

x[15] --> (start address of x) + 15

all of this happens at compiler level, and for a static array ArrayRange(x,1) is ALWAYS a constant: it does not have to be recomputed every time, it is enough to evaluate it once and store it at compile time
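To make the formulas above concrete, here is a minimal MQL5-style sketch; the array name x, its dimensions and the script form are my own choices for illustration, not anything from the posts. For a static array the row length is known at compile time, so x[i][j] reduces to a single linear offset from the start of the array:

// A sketch of how x[i][j] maps to one linear offset (names and sizes are invented).
double x[10][20];                      // static 2D array: 10 rows, 20 columns

void OnStart()
  {
   int i = 2, j = 5;
   x[i][j] = 3.14;

   int cols   = ArrayRange(x, 1);      // row length (20); a compile-time constant for a static array
   int linear = i*cols + j;            // the offset the compiler computes for x[i][j]

   Print("x[", i, "][", j, "] = ", x[i][j], ", linear offset = ", linear);
  }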

Are you writing in assembler? Why these concerns? If you are doing assembler, then alas: I have not seen any Russian documentation on how to load the instruction pipeline PROPERLY on processors newer than the Pentium I, and PROPER loading of the processor is handled not even by the compiler developers but by the developers of the OS and of the processor architecture.

If you are worried that a multiplication takes more processor clock cycles than an addition, alas, that ship sailed with the 486: loading the caches and the instruction pipeline takes more time than the arithmetic and logical operations themselves.

P.S.: Once again I urge you to read the primary sources. Here https://www.mql5.com/ru/docs/basis/types/classes the MQL5 developers describe how data alignment is organized (the same information applies to all compilers), and there is information on how to properly call Windows system functions, etc. I am writing this because there is very little Russian documentation that reflects the modern capabilities of operating systems and processors, and the old material, the stuff taught in colleges and universities, no longer corresponds to the current state of OS and hardware development.
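To illustrate the alignment point (this sketch is not taken from the linked page; the struct names are invented, and the exact sizes depend on the alignment rules the compiler applies), declaring the large fields first keeps the small ones packed together instead of being separated by padding:

// Two layouts of the same fields; print their sizes to see how much padding,
// if any, the compiler inserts under its alignment rules.
struct MixedOrder
  {
   char   c1;      // a small field followed by a larger one may force padding before d
   double d;
   char   c2;      // and trailing padding after c2
  };

struct LargestFirst
  {
   double d;       // largest field first
   char   c1;
   char   c2;      // small fields grouped at the end
  };

void OnStart()
  {
   Print("sizeof(MixedOrder)   = ", sizeof(MixedOrder));
   Print("sizeof(LargestFirst) = ", sizeof(LargestFirst));
  }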

 
IgorM:

x[2][5] --> x[ 2*ArrayRange(x,1) + 5 ]

x[15] --> (start address of x) + 15

all of this happens at compiler level, and for a static array ArrayRange(x,1) is ALWAYS a constant: it does not have to be recomputed every time, it is enough to evaluate it once and store it at compile time

Only the address of the first element is calculated at compile time. All the other elements are reached at run time through an offset, which is different each time.

For a two-dimensional array you have to compute two offsets: one for the column and one for the row (the row length multiplied by the row number), and that also happens at run time, of course. Assembler and the compiler have absolutely nothing to do with it; it is simply basic memory addressing, which matters for the correct use of computing resources in practice.

From this it is easy to see that if the performance loss between a one-dimensional and a two-dimensional array is already this large, the addressing will be slower still in more complex cases, for example with objects.
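Rather than arguing about how large the difference is, it can simply be timed. Below is a crude benchmark sketch; the array sizes, the pass count and the use of GetMicrosecondCount are my own choices, and the numbers will vary with the compiler and its optimizer:

// Time repeated passes over the same amount of data, addressed as a 2D array
// and as a manually flattened 1D array (sizes and pass count are arbitrary).
#define ROWS 200
#define COLS 300

double a2[ROWS][COLS];
double a1[60000];                      // ROWS*COLS elements, same total data

void OnStart()
  {
   int    passes = 200;
   double s1 = 0.0, s2 = 0.0;

   ulong t0 = GetMicrosecondCount();
   for(int p = 0; p < passes; p++)
      for(int i = 0; i < ROWS; i++)
         for(int j = 0; j < COLS; j++)
            s2 += a2[i][j];            // the compiler computes i*COLS + j for each access
   ulong t1 = GetMicrosecondCount();

   for(int p = 0; p < passes; p++)
      for(int i = 0; i < ROWS; i++)
         for(int j = 0; j < COLS; j++)
            s1 += a1[i*COLS + j];      // the same offset written out by hand
   ulong t2 = GetMicrosecondCount();

   Print("2D: ", t1 - t0, " us, flattened 1D: ", t2 - t1, " us (sums ", s2, " ", s1, ")");
  }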

 
Andrei01:

Only the address of the first element is calculated at compile time. All the other elements are reached at run time through an offset, which is different each time.

For a two-dimensional array you have to compute two offsets: one for the column and one for the row (the row length multiplied by the row number), and that also happens at run time, of course. Assembler and the compiler have absolutely nothing to do with it; it is simply basic memory addressing, which matters for the correct use of computing resources in practice.

From this it is easy to see that if the performance loss between a one-dimensional and a two-dimensional array is already this large, the addressing will be slower still in more complex cases, for example with objects.


Good luck understanding what is actually going on and where the performance is being lost.

I have no problems with optimizing compilers, nor with writing my own data-access constructs.

P.S.: Objects are not a "complex case". All the work of setting up the references happens at compiler level; alas, the processor does not care whether it is an object or not, it has no trouble computing offsets relative to aligned data. And even if a "miracle programmer" saves memory by packing his data into byte-sized arrays but never looks at the compiler documentation, the effectiveness of that code will be visible only in the reflection of the programmer's smug face in the mirror; in reality it is fake.

 
IgorM:


everything happens at compiler level; alas, the processor does not care whether it is an object or not, it has no trouble computing offsets relative to aligned data

You have just been shown, with a simple example, how much slower a two-dimensional array is compared with a one-dimensional one during program execution, not during compilation. I see no point in repeating myself. Evidently you do not burden yourself with writing more or less optimal computational code, and perhaps you do not need to. In that case OOP was created just for you. :)

 

You're thinking in amoeba categories. :)

"We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%".

This is the fourth time I've quoted this on this forum.

 
Andrei01:

You have just been shown, with a simple example, how much slower a two-dimensional array is compared with a one-dimensional one during program execution, not during compilation. I see no point in repeating myself. Evidently you do not burden yourself with writing more or less optimal computational code, and perhaps you do not need to. In that case OOP was created just for you. :)


What kind of optimal code are we talking about? You have absolutely no idea how compilers and virtual machines work.

A programmer does not need to work out how access to and addressing of physical memory elements is implemented in each particular compiler (even if it is done diagonally rather than column by column, there is nothing you can do about it); that is the compiler developers' job. If the programmer is not satisfied with the code, he optimizes his code:

- by increasing the size of the code and decreasing the size of the data, losing calculation speed

- by increasing the size of the data in the code and gaining speed

- or, alternatively, by using a different compiler

Those are ALL the options!
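A small sketch of the first two trade-offs (the sine table, its 0.1 degree resolution and the function names are my own choices, not something from the post): you can pay with extra computation per call and keep the data small, or pay with extra data and make each call cheaper:

// Option 1: small data footprint, the value is recomputed on every call.
double SinOnTheFly(const double deg)
  {
   return MathSin(deg*M_PI/180.0);
  }

// Option 2: larger data footprint, a table built once, then a single indexed read per call.
double g_sin[3600];                            // sine precomputed every 0.1 degree

void BuildTable()
  {
   for(int i = 0; i < 3600; i++)
      g_sin[i] = MathSin(i*M_PI/1800.0);
  }

double SinFromTable(const double deg)          // 0.1 degree resolution
  {
   int idx = ((int)MathRound(deg*10.0))%3600;
   if(idx < 0)
      idx += 3600;
   return g_sin[idx];
  }

void OnStart()
  {
   BuildTable();
   Print(SinOnTheFly(30.0), " vs ", SinFromTable(30.0));   // both are close to 0.5
  }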

OOP is another branch for writing EFFICIENT code. The effectiveness of OOP is that the programmer can create a mathematical model in the form of some kind of architecture and thereby achieve great versatility for his code. If you think that classes use some other kind of addressing for physical access to data, you are mistaken: the microscopic extra amount of data, the linked object table, will not increase the access time to the physical data in memory in any way, and the extra data is more than offset by the multiple reuse of the code.

I'm shocked: you started out by trashing OOP and then shifted to arguments about addressing in multidimensional versus one-dimensional arrays. Did you actually study this somewhere, or is it all just speculation and fantasy?

Work with multidimensional arrays has long been implemented at the hardware level: take the Z-buffer used by video cards. Ah yes, those "fools", the hardware developers, did not consult you and never learned how efficient the addressing of multidimensional arrays really is; and, without consulting you, all programmers use multidimensional arrays without a second thought and do not look for happiness in bloating their code for the sake of the imaginary efficiency of one-dimensional arrays.

 
Andrei01:
Does the reliability of information depend on who is presenting it? Any sensible person should understand that information is objective, not subjective. :)
And anyone who sets out to understand the issue will realise that information, and incidentally its quantity too, is a subjective thing, not an objective one. :))
 
The efficiency of modern (especially 64-bit) programs is determined to a greater extent by their development environments and compilers than by the CPU's raw performance or by hand-tuned code. Anyone who wants to understand why this is so, and not the other way round, should read A. Tanenbaum's monumental "Computer Architecture", especially chapter 5, the section on Intel IA-64. No amount of hand-polished procedural code in an old compiler will give you the performance gain you get from simply moving to a modern development environment. Take assembler, for example. Yes, it is a real thing, and no doubt it will live forever. But I doubt you could write IA-386 assembler that outperforms ordinary IA-64 code that uses modern hardware resources such as multicore processors, extended instruction sets and so on. Hence one conclusion: we should write in what we are given. If we were given .NET, we write in .NET and let thousands of other programmers think about how to speed up CIL code, how to parallelize threads, and so on. The same goes for MQL4: its time has passed, we have MQL5, MetaQuotes will support it, and they will think about how to increase the performance of their language.
 
IgorM:


if the programmer is not satisfied with the code, he/she optimizes his/her code:

- by increasing the size of the code and decreasing the size of the data, losing computational speed

- by increasing the size of the data in the code and gaining speed

- or, alternatively, by using a different compiler

Those are ALL the options!

Code optimization requires the programmer to have at least a minimal understanding of how resource-intensive a code fragment will be in terms of the elementary operations performed (additions, multiplications, memory accesses, address calculations, etc.). Without this, no optimization is possible in principle, and even the best compiler will be powerless to help such a programmer. It seems an obvious thing, but I can see this may be big news for many programmers, too. :)
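A small sketch of the kind of reasoning meant here (the array, the values and the script form are invented for the example): by counting the elementary operations inside the loop, you notice that the division does not depend on the loop variable and can be done once instead of on every iteration:

// Count the elementary operations per iteration and hoist the invariant ones out of the loop.
double price[1000];

void OnStart()
  {
   double point = 0.00001;

   // naive: one multiplication and one division on every iteration
   double sum1 = 0.0;
   for(int i = 0; i < 1000; i++)
      sum1 += price[i]*100.0/point;

   // reasoned: 100.0/point does not depend on i, so it is computed once
   double factor = 100.0/point;
   double sum2 = 0.0;
   for(int i = 0; i < 1000; i++)
      sum2 += price[i]*factor;          // one multiplication per iteration remains

   Print(sum1, " ", sum2);              // effectively the same result with fewer operations per iteration
  }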

 
alsu:
And anyone who sets out to understand the issue will realise that information, like quantity, is a subjective thing, not an objective one:))

Well, you really do manage to confuse things and stir them into one explosive cocktail. :)

One is a source of information, which is objective, and the other is a receiver, which is subjective because it is not always able to perceive all the information, but only part of it.