The splendour and poverty of OOP - page 5

 
meat:

Here, as I understand it, the index is defined via a binary search?

No, direct access as in an array.

__________

Maybe I overreacted, I'll think it over.

In general, no one prevents you from creating an array sized to the whole range of the type and getting constant-time access. (switch works only with integral types.)

In the case you described, it is more convenient to introduce an enumeration.
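A minimal sketch of the idea above, in MQL5 (the enumeration and the weight values are illustrative, not from the thread): with dense integral keys a switch becomes a constant-time computed jump, and a plain array indexed by the enumeration gives the same O(1) access explicitly.

// Minimal sketch; Direction and the weights are illustrative names.
enum Direction { DIR_BUY = 0, DIR_SELL = 1, DIR_HOLD = 2 };

int weights_by_dir[3] = {1, -1, 0};     // index = enumeration value

int WeightBySwitch(const Direction d)
  {
   switch(d)                            // dense integral cases: constant-time dispatch
     {
      case DIR_BUY:  return(1);
      case DIR_SELL: return(-1);
      default:       return(0);
     }
  }

int WeightByArray(const Direction d)
  {
   return(weights_by_dir[d]);           // direct access, as in an array
  }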

 
TheXpert:

No, direct access as in an array.

__________

Maybe I overreacted, I'll think it over.

In general, no one prevents you from creating an array sized to the whole range of the type and getting constant-time access. (switch works only with integral types.)

In the case you described, it is more convenient to introduce an enumeration.

For the whole range of the type? No way! That would require 16 GB of memory (an array with an entry for every value of type int). And what is the point of covering the whole range anyway? Covering the difference between the maximum and minimum values is enough. But even this is a questionable case, because when the values are large we must first negotiate with the user how much memory he is willing to allocate to the program. That is why it only suits small key values (or rather, a small difference between the maximum and minimum). This leaves only the binary search.
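A hedged sketch of the compromise just described, in MQL5 (all names are mine, not from the thread): a table over every value of int would indeed need 2^32 * 4 bytes = 16 GB, but a table over [min..max] needs only max - min + 1 entries, indexed by key - min.

// Hedged sketch; lookup_table and the 16M-entry budget are illustrative.
int lookup_table[];
int table_min_key = 0;

bool BuildTable(const int &keys[], const int &values[],
                const int min_key, const int max_key)
  {
   long span = (long)max_key - (long)min_key + 1;
   if(span > 16777216)                  // first agree a memory budget with the user
      return(false);
   table_min_key = min_key;
   ArrayResize(lookup_table, (int)span);
   ArrayInitialize(lookup_table, 0);
   for(int i = 0; i < ArraySize(keys); i++)
      lookup_table[keys[i] - min_key] = values[i];
   return(true);
  }

int Lookup(const int key)
  {
   return(lookup_table[key - table_min_key]);   // constant-time access
  }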

 
meat:

This leaves only the binary search.

No, it doesn't have to come to that. In short, if you need to map an arbitrary value to a value, you need binary search. If it is enough to work with an ordinal number and map that number to a value, a constant-time lookup will do. (A sketch of the binary-search case follows below.)

I understand about memory )), that's why I wrote that I overdid it.
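For the sparse-key case, a hedged sketch of the binary-search fallback (assuming the keys sit in a sorted array parallel to the values; the names are mine):

// Hedged sketch; sorted_keys[] must be sorted in ascending order.
// ArrayBsearch returns the index of the found element, or of the
// nearest one when there is no exact match, so we verify equality.
int SparseLookup(const int &sorted_keys[], const int &values[],
                 const int key, const int not_found = 0)
  {
   int i = ArrayBsearch(sorted_keys, key);
   if(i >= 0 && sorted_keys[i] == key)
      return(values[i]);               // O(log n) instead of O(1)
   return(not_found);
  }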

 

there is always room for the question - why? Compare the spread chart online and in the tester. The tester has nothing to do with reality...

tol64:

Have you already given a more detailed explanation (proof) somewhere?

You have to back up your statements with proof; otherwise no one will even look at them. ;)

 
C-4:
Guys, read the switch documentation. A proper switch is a computed jump (a jump table) whose performance does not depend on the number of cases: 1 case, 100 or 1000 - its dispatch time will be constant.
Wow, thanks, good reference - read it with pleasure and benefit.
 
dimeon:

there is always room for the question - why? Compare the spread chart online and in the tester. The tester has nothing to do with reality...

To get attention. Open a new thread and cover your question in more detail. Show how it is in real trading and how it is in the tester. Offer your own solution to the problem. Otherwise it will remain "without a chance and without options". )
 
Vinin:

The proof will have to come from the other side. Or it will again be just words.

By and large, I'm only interested in facts.

Although I already know that OOP is slower, it provides quite tangible conveniences.

As promised, I'm posting the profiling results of one project. (Please forgive me, but some functions are masked, because the code is not for the general public.)

To begin with, I will say that this is a real OOP project with heavy transformation of the source data. The idea of using OOP in it is taken to the absolute: for example, it uses no global variables, no bare arrays, and no functions outside of classes at all, because they are not OOP enough. For it to work, it needs the history of orders and deals executed over the entire period. Parsing 6014 deals and 6599 orders takes only 3.1 seconds, or about 0.25 milliseconds per transaction, and deploying all deals, orders and positions requires about 13 MB of RAM, or on average about 1 kilobyte per transaction. I think this is a very good result for an OOP application:

2014.07.07 12:44:33.464 TestMA (AUDCAD,H1) We are begin. Parsing of history deals (6014) and orders (6599) completed for 3.104 sec. 13MB RAM used.

But let's take a look at how that time breaks down during application initialization:

We can see that most of the time is spent in calls to the AddNewDeal function. It is a composite function, and the real work is delegated to RecalcValues (57%), which in turn consists of system functions such as HistoryOrderGetInteger:

Note that the call times of these functions are approximately equal.

Note that this is the end of the whole function pipeline. Before reaching these calculations, execution passes through another dozen intermediate OOP methods, some of them virtual. Yet their running time is negligible, and in the profiler they sit in the second half of the list.

Because it is a 100% OOP application, it is very easy for me to track down the time-critical sections of code, and I can find new ways to improve performance very effectively. I already know that the remaining 43% is 80-90% made up of calls to CArray.Resize(): in some places the code is not optimized and array reallocation happens more often than necessary. I could easily rewrite those OOP modules and improve performance by 25-30%. Without OOP this would be harder, because each function is potentially involved in an unlimited number of interrelations, and it becomes much more difficult to predict the consequences of changing such a function.
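A hedged sketch of the kind of fix being described, assuming the masked classes behave like the standard library's CArrayInt (the real code is not shown in the post): reserving capacity once keeps Add() from going through Resize() on every growth step.

// Hedged sketch; FillDeals and expected_total are illustrative names.
#include <Arrays\ArrayInt.mqh>

void FillDeals(CArrayInt &deals, const int expected_total)
  {
   deals.Reserve(expected_total);       // one allocation up front...
   for(int i = 0; i < expected_total; i++)
      deals.Add(i);                     // ...so no reallocation inside the loop
  }

Without the Reserve() call the array grows in small increments, which is the kind of repeated Resize() cost the profiler shows.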

As a result, it turns out that even a complex OOP project can be brought to the performance limit of the underlying system functions. Without OOP such performance would be harder to achieve, because there would be so many functions that sooner or later you would make a mistake: unnecessary calls, unoptimized twins, or overly complex and cumbersome implementations.

 
dimeon:

there is always room for the question - why? Compare the spread chart online and in the tester. The tester has nothing to do with reality...

Forum on trading, automated trading systems and strategy testing

The splendour and poverty of OOP

tol64, 2014.07.07 09:12

To get attention. Open a new thread and cover your question in more detail. Show how it is in real trading and how it is in the tester. Offer your own solution to the problem. Otherwise it will remain "without a chance and without options". )

+++

to dimeon - Open a thread and you'll learn plenty of arguments about why it can't be done and why it shouldn't be.

 
C-4:

As promised, I'm posting the profiling results of one project. (Please forgive me, but some functions are masked, because the code is not for the general public.)

...

What is the point of all this? You haven't shown the code of your functions (unless you count a torn-out fragment). So what is there to discuss? This thread is specifically about comparing the performance of OOP and procedural programming. And the fact that your secret functions supposedly perform some work, delegate something somewhere, take some time, and that you masterfully manage all of this - of course, we are incredibly happy for you, but what good is this information if we cannot see the code?

 
meat:

What is the point of all this? You haven't shown the code of your functions (unless you count a torn-out fragment). So what is there to discuss? This thread is specifically about comparing the performance of OOP and procedural programming. And the fact that your secret functions supposedly perform some work, delegate something somewhere, take some time, and that you masterfully manage all of this - of course, we are incredibly happy for you, but what good is this information if we cannot see the code?

He showed that whether a call is direct or virtual makes no difference in real projects.

By the example of profiling a real OOP project, I will show that at the limit its performance tends towards the performance of the system function calls

The vast majority of the cost goes into system function calls, which is where MQL programs spend most of their time. The overhead of arranging those calls is negligible compared to the payload.
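A hedged micro-benchmark sketch of that claim, in MQL5 (all names are mine, and the compiler is free to inline or optimize, so treat the output as a rough indication only): it times a direct call against a virtual call, and either number can then be set against the cost of the system calls that dominate a real program.

// Hedged sketch; CBase, DirectCalc and the loop count are illustrative.
class CBase
  {
public:
   virtual long Calc(const long x) { return(x + 1); }
  };

long DirectCalc(const long x) { return(x + 1); }

void OnStart()
  {
   CBase *obj = new CBase();
   const int N = 10000000;
   long sum = 0;

   ulong t0 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      sum += DirectCalc(i);             // direct call
   ulong t1 = GetMicrosecondCount();
   for(int i = 0; i < N; i++)
      sum += obj.Calc(i);               // virtual call through a pointer
   ulong t2 = GetMicrosecondCount();

   PrintFormat("direct: %I64u us, virtual: %I64u us (sum=%I64d)",
               t1 - t0, t2 - t1, sum);
   delete obj;
  }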