question for #define experts - page 10

 
Alexey Tarabanov:

Come on. First the define is expanded, then the executable is built, then the executable is run.

Got it. So a define only speeds up the build of a big project, not the execution of code in the executable.

Generally, for self-education, I'm interested in pushing code as close to the hardware as possible, in terms of execution speed.
It doesn't matter which architecture or domain.
I know there are languages that run entirely in memory; that would probably be the fastest solution.
 
Roman:

Got it. So a define only speeds up the build of a big project, not the execution of code in the executable.

Generally, for self-education, I'm interested in pushing code as close to the hardware as possible, in terms of execution speed.
It doesn't matter which architecture or domain.
I know there are languages that run entirely in memory; that would probably be the fastest solution.

A define gives no acceleration at all, with the possible small exception of defining simple constants like #define FIVE 5.

A define exists to make code more readable and easier to modify. That's it; that is the whole of its functionality.
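
To see what a define actually does, here is a minimal MQL5 sketch (the names MAX_BARS and SQUARE are invented for the example): the preprocessor replaces each define with its text before compilation, so the resulting binary is identical to one with the literals typed by hand - no runtime cost, but no runtime speedup either.

#define MAX_BARS   100          // textual substitution, resolved before compilation
#define SQUARE(x)  ((x)*(x))    // function-like macro: also pure text replacement

void OnStart()
{
   int bars = MAX_BARS;         // after preprocessing this is exactly: int bars = 100;
   int squared = SQUARE(bars);  // expands to ((bars)*(bars)) - no function call
   printf("bars = %i, squared = %i", bars, squared); // bars = 100, squared = 10000
}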

 
I wonder what language is used to write software for the ISS or for medical equipment.
Microcontrollers and drivers are written either in C or in assembly language.
What other options are there? Has anyone studied Q?
 

IMHO, the stages of programming))

1 - I don't understand anything

2 - I understand the procedural style (it's not even functional, just a piece of code)

3 - I believe I can write anything I like in the functional style)

4 - I understand that I don't understand OOP

5 - I understand OOP

6 - I understand OOP for real

7 - I can dream up any combination of tree nodes in OOP, hash it all and salt it with defects to my taste

8 - all the same programming, but with strict typing, access control and so on

9 - I understand that point 7 is evil (the dark side) and that there is a world of civilized OOD (design patterns)

10 - from those I consider cool programmers I increasingly hear "OOP is evil"; that is, there is a civilized use of patterns, and when everything is done strictly the civilized way it turns out the need for all of that machinery is not so great - yes, it's routine, but their speed of writing application code is several times higher than mine :(

11 - I understand that modern functional languages emulate OOP at some stages - all these function extensions - in other words, I begin to understand the functional programming approach (point 3)

In general, I don't see much dependence on the syntax of the language; the principles don't change. There are strict languages and non-strict languages, and it is hard to code in the non-strict ones. C++, C#, R, Q, Go, JS, Ruby - it makes not the slightest difference.

Low level is faster (it's realistic to write your own very fast piece of code, faster than the standard one - but why?). For example, it's easy to write a sort that's faster than the standard one, but the point of a proper sort is not raw speed))) it's a minimal number of operations, and the minimal number of operations is not always the fastest way - though it is very good.
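
As a sketch of that kind of specialized routine (an illustration only, not a claim about any standard library's sort): when the values are known to lie in a small range, a counting sort does no comparisons at all, which is exactly the "faster than the standard, but narrower" trade-off described above.

// counting sort for values known to lie in [0, range): O(n + range) operations,
// versus O(n log n) for a general comparison sort
void CountingSort(int &arr[], const int range)
{
   int counts[];
   ArrayResize(counts, range);
   ArrayInitialize(counts, 0);
   for(int i = 0, sz = ArraySize(arr); i < sz; i++)
      counts[arr[i]]++;                  // tally each value
   int pos = 0;
   for(int v = 0; v < range; v++)        // write the values back in order
      for(int k = 0; k < counts[v]; k++)
         arr[pos++] = v;
}

void OnStart()
{
   int data[] = {3, 1, 4, 1, 5, 9, 2, 6};
   CountingSort(data, 10);               // assumes every value is in [0, 10)
   for(int i = 0; i < ArraySize(data); i++)
      printf("%i", data[i]);             // prints 1 1 2 3 4 5 6 9, one per line
}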

About microcontrollers - well, that's usually where you start learning. Although the language itself is not so important: high-level languages let you operate with large fragments of code, while low-level ones are more universal.

MQL5 documentation: Language Basics / Syntax
  • www.mql5.com
Syntax - Language Basics - MQL5 Reference - reference for the MetaTrader 5 algorithmic/automated trading language
 
No, no, no. No OOP ))
OOP is a wrapper over a procedural language.
That is, an extension of a procedural language, but with its own paradigm.
Personally, I think OOP just clutters up a high-level language ))
All the listed languages are high-level languages, except Q.
You must be thinking of the wrong Q. Q is an extension of the K language.
The Q language implements the KDB+ database.

Kdb+, from Kx, is

  • a high-performance cross-platform columnar database for historical time series
  • an in-memory computational engine
  • a real-time stream processor
  • an expressive query and programming language called q

The paradigm there is very different, not C-like; I believe the logic is vector-based.
But I suspect that even the most experienced programmers on this forum haven't heard of this language)).



0.0 the evolution of q

Arthur Whitney developed the q programming language and its database kdb+. Released by Kx Systems, Inc. in 2003, q's main design goals were expressiveness, speed and efficiency.
In these it is beyond compare. The design trade-off is brevity, which can be disconcerting to programmers coming from verbose traditional database programming environments -
such as C++, Java, C# or Python - and relational DBMSs. While the q gods revel in programs that resemble an ASCII core dump, this guide is for the rest of us.

Q evolved from APL (A Programming Language), which was originally invented as a mathematical notation by Kenneth Iverson at Harvard University in the 1950s.
APL was introduced in the 1960s by IBM as a vector programming language, meaning that it can process a list of numbers in a single operation.
It was successful in finance and other industries that required heavy number crunching.

The mitochondrial DNA of q traces from APL through A and A+ to k. All of these languages were well adapted to performing complex calculations on vectors quickly.
What is new in q/kdb+ is the ability to process large volumes of time-series data very efficiently within a relational paradigm.
Its syntax allows "select" expressions similar to SQL 92, and its collection of built-in functions provides a complete and powerful stored-procedure language.

There is also some Lisp whispering in q's genes: the fundamental data construct of q is the list. Although the notation and terminology are different, its symbols are taken from their counterpart atoms in Scheme.

Q's APL pedigree also shows the influence of functional programming.
In his 1977 Turing Award lecture, which introduced purely functional programming, Backus acknowledged drawing inspiration from APL.
Although q is not purely functional, it is strongly functional in the sense that even its basic data structures, the list and the dictionary, are treated as mathematical mappings.

0.1 philosophy

A skilled q developer thinks differently than developers in conventional programming environments such as C++, Java, C# or Python, henceforth referred to as "traditional programming".
To get you into the right mindset, we summarize some potential discontinuities for the q beginner - henceforth known as a qbie.

Let's recall some of the data issues in traditional database programming:

  • An in-memory representation - such as a collection of objects - must be mapped to another representation - such as tables - in order to persist.
    Considerable effort is required to get the object-relational mapping right.
  • Objects must be mapped to yet another representation for transport, usually some binary or XML form that flattens out reference chains.
  • Data manipulation - e.g., selection, grouping and aggregation over large data sets - is best done in stored procedures on the database server.
    Complex numeric calculations are best performed outside the database, on the application server.
  • Converting data for display in a graphical interface is best done in a separate layer - e.g., HTML5 and JavaScript in the browser.

Much of traditional programming design effort goes into getting these different representations right, and many lines of code are needed to marshal resources and synchronize the representations.
All of this is surprisingly simple in q/kdb+.

Interpreted: Q is interpreted, not compiled. At runtime, data and functions live in an in-memory workspace.
Iterations of the development cycle tend to be fast because all the runtime information needed for testing and debugging is immediately available in the workspace.
Q programs are stored and executed as plain text files called scripts. The interpreter's eval and parse routines are exposed, so you can generate code dynamically in a controlled way.

Types: Q is a dynamically typed language in which type checking is mostly unobtrusive.
Every variable has the type of its currently assigned value, and type promotion is automatic for most numeric operations. Types are checked in operations on homogeneous lists.

Ordered evaluation: While q is typed in from left to right, expressions are evaluated from right to left or, as the q gods prefer, "left of right" - meaning that a function is applied to the argument on its right.
There is no operator precedence, and function application can be written without parentheses. Punctuation noise is greatly reduced.

Null and infinity values: In classic SQL, NULL represents missing data for a field of any type and takes up no storage space.
In q, null values are typed and occupy the same space as non-null values. Numeric types also have infinity values.
Infinity and null values can participate in arithmetic and other operations with (mostly) predictable results.

Integrated I/O: I/O is done through function handles that act as windows to the outside world.
Once such a handle is initialized, passing a value to the handle is a write.

Table oriented: Abandon objects, all ye who enter here. Unlike traditional languages, you won't find classes, objects, inheritance or virtual methods in q.
Instead, q has tables as first-class objects. The lack of objects is not as bad as it might seem at first glance.
Objects are essentially glorified records (i.e., entities with named fields), which are modeled by q dictionaries. A table can be thought of as a list of record dictionaries.

Ordered lists: Because classical SQL is an algebra of sets - which are unordered and contain no duplicates - row order and column order are undefined,
which makes time-series processing cumbersome and slow. In q, data structures are based on ordered lists, so time series preserve the order in which they were created.
In addition, simple lists occupy contiguous storage, so processing large data is fast. Very fast.

Column oriented: SQL tables are organized as rows spread across storage, with operations applied to fields within a row. Q tables are lists of columns in contiguous storage, with operations applied to entire columns.
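
To illustrate the row vs. column layout in the language of this thread (an MQL5-style sketch of the general idea, not of kdb+ itself; the Trade struct and field names are invented): summing one field in the columnar layout walks a single contiguous block, while the row layout strides over the other fields of every record.

// row-oriented: each record keeps all of its fields together in memory
struct Trade
{
   double price;
   long   volume;
};

void OnStart()
{
   int n = 1000;

   Trade rows[];                 // "rows": price and volume interleaved
   ArrayResize(rows, n);

   double prices[];              // "columns": each field is its own contiguous list
   long   volumes[];
   ArrayResize(prices, n);
   ArrayResize(volumes, n);

   for(int i = 0; i < n; i++)    // fill both layouts with the same data
   {
      rows[i].price = i*0.1;  rows[i].volume = i;
      prices[i]     = i*0.1;  volumes[i]     = i;
   }

   double sumRows = 0;           // row layout: strides over the volume bytes too
   for(int i = 0; i < n; i++)
      sumRows += rows[i].price;

   double sumCols = 0;           // column layout: one contiguous pass - the kdb+ way
   for(int i = 0; i < n; i++)
      sumCols += prices[i];

   printf("sum over rows: %g, sum over columns: %g", sumRows, sumCols);
}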

In-memory database: kdb+ can be thought of as an in-memory database with persistent backing. Since data manipulation is performed with q, there is no separate stored-procedure language.
In fact, kdb+ essentially consists of serialized q column lists written to the file system and then mapped into memory.

 

the woman went and bought herself a piglet (as the folk saying goes)

I got into this discussion and now I have no peace))

I've tested some more variants. MQL5 performs very good runtime optimization, but still... it's not right to rely on a smart compiler - an unnecessary function call is not the best solution.

imho, this is the only way

void OnStart()
{
   int arr[];
   ArrayResize(arr, 100);
   ArrayInitialize(arr, 1);
   int sum = 0;
   // descending traversal: ArraySize() is called once, only to seed the counter
   for(int i = ArraySize(arr) - 1; i >= 0; i--)
   {
      sum += arr[i];
   }
   printf("sum = %i", sum); //sum = 100
   
   sum = 0;
   // ascending traversal: the size is hoisted into sz before the loop starts
   for(int i = 0, sz = ArraySize(arr); i < sz; i++)
   {
      sum += arr[i];
   }
   printf("sum = %i", sum); //sum = 100
}

the first loop - traversal in descending order of the array indexes

the second loop - traversal in ascending order of the array indexes

 
Igor Makanu:

the woman went and bought herself a piglet (as the folk saying goes)

I got into this discussion and now I have no peace))

I've tested some more variants. MQL5 performs very good runtime optimization, but still... it's not right to rely on a smart compiler - an unnecessary function call is not the best solution.

imho, this is the only way

the first loop - traversal in descending order of the array indexes

the second loop - traversal in ascending order of the array indexes

One function is not like another. And maybe the size of an array is not the best example, because it is equivalent to a plain variable in memory use and access: in effect it is a value in a memory cell that was filled when the array was declared. But a maximum or minimum over an array, or any calculation over ten variables, should not go into a loop condition; it's better to compute it first and substitute the result.

And as far as I understand, array elements and structure elements are variables with specific addresses in memory, and the rest is just notational convenience: A[10] and ten variables A1, A2 ... A10 are the same in access and size. The type, of course, being the same)
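
A sketch of that point about loop conditions (array contents and names invented for the example): the maximum of the array is loop-invariant, so evaluating it inside the condition repeats the same work on every pass, while computing it once beforehand costs a single traversal.

void OnStart()
{
   double data[];
   ArrayResize(data, 1000);
   for(int i = 0; i < 1000; i++)
      data[i] = MathMod(i*7919, 1000);   // arbitrary fill

   // wasteful: the array maximum is recomputed on every iteration
   int hits = 0;
   for(int i = 0; i < ArraySize(data) && data[i] < data[ArrayMaximum(data)]; i++)
      hits++;

   // better: the maximum cannot change inside the loop - compute it once
   double mx = data[ArrayMaximum(data)];
   int hits2 = 0;
   for(int i = 0, sz = ArraySize(data); i < sz && data[i] < mx; i++)
      hits2++;

   printf("hits = %i, hits2 = %i", hits, hits2); // both loops count the same thing
}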

 
Valeriy Yastremskiy:

And as far as I understand, array elements and structure elements are variables with specific addresses in memory, and the rest is just notational convenience: A[10] and ten variables A1, A2 ... A10 are the same in access and size. The type, of course, being the same)

if you mean physically, in CPU instructions - no

an array is a single memory area; accessing an array element means computing the element's offset from the beginning of that area by its index and extracting the data (bytes) according to the stored type


but if you mean the logic of the algorithm, then yes, they are indexed variables.
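
A sketch of that offset arithmetic (sizes and values invented for the example): a "two-dimensional" structure stored in one linear block is addressed by computing a single index from the coordinates - which is all an element access amounts to at this level, with the compiler scaling the index by the size of the stored type.

void OnStart()
{
   int rowsN = 3, colsN = 4;
   double flat[];                     // one contiguous memory area
   ArrayResize(flat, rowsN*colsN);

   // element (r,c) lives at offset r*colsN + c from the start of the block
   for(int r = 0; r < rowsN; r++)
      for(int c = 0; c < colsN; c++)
         flat[r*colsN + c] = r*10 + c;

   printf("element (2,3) = %g", flat[2*colsN + 3]); // 23
}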

in general, on the problem under study, the only correct advice is: remember what a function call can involve - checking the input parameters, pushing data onto the stack, saving registers, restoring registers, the call itself and then the return. What I mean is that a lot may be happening in the background where we cannot see it. That's why it's better to write for (int i=0; i<size; i++) with a precomputed size and not rely on the compiler to do what we expect it to do.
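
One way to avoid assuming what the compiler does is to measure. A minimal MQL5 timing sketch (GetMicrosecondCount is the built-in microsecond timer; the array size is arbitrary):

void OnStart()
{
   int arr[];
   ArrayResize(arr, 10000000);
   ArrayInitialize(arr, 1);

   ulong t0 = GetMicrosecondCount();
   int s1 = 0;
   for(int i = 0, sz = ArraySize(arr); i < sz; i++)  // size hoisted once
      s1 += arr[i];
   ulong t1 = GetMicrosecondCount();

   int s2 = 0;
   for(int i = 0; i < ArraySize(arr); i++)           // function call in the condition
      s2 += arr[i];
   ulong t2 = GetMicrosecondCount();

   printf("hoisted: sum=%i in %I64u us; call in condition: sum=%i in %I64u us",
          s1, t1 - t0, s2, t2 - t1);
}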

 
Igor Makanu:

if you mean physically, in CPU instructions - no

an array is a single memory area; accessing an array element means computing the element's offset from the beginning of that area by its index and extracting the data (bytes) according to the stored type


but if you mean the logic of the algorithm, then yes, they are indexed variables.

in general, on the problem under study, the only correct advice is https://www.mql5.com/ru/forum/354662/page4#comment_19039624:

I agree with that; the compiler is a black box, so you can't assume anything for sure.

The main thing is that everything conceived works as intended)))