Interesting take on OOP - page 9

 
Igor Makanu:

I'm 99% sure that these codes will be executed at the same speed, down to a clock cycle - at the processor level there is optimization, parallelization and who knows what else running at the micro-operation level.

Don't you get a negative effect here, when code of any quality is written with the expectation that "the compiler will polish it into the optimal one"?


You know for sure that the compiler will do the right thing with one style of writing. With another style, you just have to trust that the compiler is smarter.

Given cross-platform, different compilers, etc., I choose to be aware of what I'm doing in the code.
 
fxsaber:

Don't you get a negative effect here, when code of any quality is written with the expectation that "the compiler will polish it into the optimal one"?

My examples are hardly "code of any quality" - they are typical constructs. I have been comparing sources on GitHub for a long time, and constructs like both the tst1 and tst2 examples are actively used by programmers.

That is why I think compiler developers learned these standard code constructs long ago, and they are not a problem for compilers.


As for the negative effect - as @TheXpert wrote above, there are company requirements for code formatting, but the requirements are generally the same: the code must be understandable to other team members, including those who have only just joined...


fxsaber:

You know for sure that the compiler will do the right thing with one style of writing. With another style, you just have to trust that the compiler is smarter.

It's not the compiler that is smarter now, but the processor itself, imho. If we are talking about high-performance code, the main performance overhead is not in function calls but in memory reads (memory accesses): if you can replace stored data/variables with values that are cheap to recompute, you get a small gain at the level of processor micro-operation optimization.

... but all the rest, imho, is evil ))))


P.S.: there is also code optimization at the compiler level - I have only read a little about it, so all of this is at the level of a guess; I do read about PC hardware periodically, and have for a long time, which is where the opinion above comes from.
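For illustration only - a minimal sketch of that idea with hypothetical names and sizes (not from the post); whether the inline variant actually wins depends on the compiler and the cache:

//--- variant 1: precompute once, then read every value back from memory
double stored[1000];

void OnStart()
  {
   for(int i = 0; i < 1000; i++)
      stored[i] = i * 0.5 + 1.0;          // cheap expression, now parked in RAM/cache

   double sum_stored = 0.0;
   for(int i = 0; i < 1000; i++)
      sum_stored += stored[i];            // every iteration is a memory access

//--- variant 2: recompute the same cheap expression inline, no array at all
   double sum_inline = 0.0;
   for(int i = 0; i < 1000; i++)
      sum_inline += i * 0.5 + 1.0;        // stays in registers, no extra memory traffic

   Print(sum_stored, " ", sum_inline);    // identical results
  }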




fxsaber:

Given cross-platform, different compilers, etc., I choose to be aware of what I'm doing in the code.

Then I have no choice - in short, "I'm an artist, that's how I see it" ))), I hope I didn't offend.

 

I have a rule: after 5 years the code must still be understandable to its author; if it is not understandable, it is bad code.

And if others understand it too, very good.

 
Valeriy Yastremskiy:

I have a rule: after 5 years the code must still be understandable to its author; if it is not understandable, it is bad code.

And if others understand it too, very good.

Here (and here) is very good code. But I don't understand it. My brain stopped growing a long time ago.

 
fxsaber:

Here (and here) is very good code. But I don't understand it. My brain stopped growing a long time ago.

The topics are complicated. Not everyone will understand them ), not to mention the code.

 

Oh, what a topic... And without me... It's not good... Gotta speak up.

Regarding the article in the title: the correct premise (code should be as deterministic as possible) is used in a very silly way, with the addition operation and the accumulation operation compared as examples. The conclusion is that addition is deterministic, since it always returns the same result, while accumulation is not, because its result is always different.

But, excuse me... These are different operations, and in both cases the results are perfectly correct - exactly what is expected from addition and accumulation respectively!

Even the random number generator example cannot be called "non-deterministic" either, if you consider that it is an operation with a random component.

As it seems to me, the whole "non-determinism" here is that the author expects from the code something other than what the code is intended for.
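To make that concrete - a hedged sketch of my own, not from the article: the addition function is pure and always returns the same result for the same inputs, while the accumulator carries state, so consecutive calls return different values, and both behaviours are exactly what those operations promise:

//--- pure addition: same inputs, same output
double Add(const double a, const double b)
  {
   return(a + b);
  }

//--- accumulation: the object carries state, so each call returns a new running total
class CAccumulator
  {
private:
   double            m_total;
public:
                     CAccumulator(void) { m_total = 0.0; }
   double            Accumulate(const double value) { m_total += value; return(m_total); }
  };

void OnStart()
  {
   Print(Add(2.0, 3.0));        // 5.0 every time
   Print(Add(2.0, 3.0));        // 5.0 again - deterministic

   CAccumulator acc;
   Print(acc.Accumulate(2.0));  // 2.0
   Print(acc.Accumulate(3.0));  // 5.0 - a different result, but exactly what accumulation is for
  }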


And the second thing is code readability. I find the "question mark" operator very harmful and hard to understand. Replacing the "question" with a conditional operator yields executable code with exactly the same efficiency; the source code becomes noticeably more verbose, but also much clearer, which I think is a big plus.
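For illustration only (hypothetical variables, same logic in both variants) - the ternary form and the explicit conditional produce equally efficient executable code, the second is just easier to read:

void OnStart()
  {
   double base_lot = 0.1;                 // hypothetical inputs
   double profit   = 12.5;

//--- compact "question mark" form: easy to misread once the expressions grow
   double lot1 = (profit > 0.0) ? base_lot * 2.0 : base_lot / 2.0;

//--- the same logic as an explicit conditional: more verbose, but the intent is obvious
   double lot2;
   if(profit > 0.0)
      lot2 = base_lot * 2.0;
   else
      lot2 = base_lot / 2.0;

   Print(lot1, " ", lot2);                // identical values
  }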

I always try to split all those long logical expressions into a series of separate operations with intermediate results. Even if this produces less efficient executable code, the benefit of better understanding is, in my opinion, far more important.
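And a sketch of that second point, with made-up names: the compound condition and the version with named intermediate results are equivalent, but the second one documents itself and is easier to step through in the debugger:

void OnStart()
  {
   double price      = 1.1025;            // hypothetical inputs
   double upper_band = 1.1000;
   double spread     = 8.0;
   double max_spread = 15.0;
   bool   trading_on = true;

//--- one dense expression: correct, but hard to read and to debug
   bool can_buy_dense = trading_on && price > upper_band && spread <= max_spread;

//--- the same condition split into named intermediate results
   bool breakout_up    = (price > upper_band);
   bool spread_is_fine = (spread <= max_spread);
   bool can_buy        = trading_on && breakout_up && spread_is_fine;

   Print(can_buy_dense, " ", can_buy);    // identical values
  }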

 

and this is the realm of Christmas tree-oriented programming :-)

void OnStart()
  {
   if(condition)
     {
      if(other_condition)
        {
         for(loop_stmt)
           {
            if(wtf)
              {
               while(repeats)
                 {
                  if(oh_shit)
                    {
                     if(really_fck)
                       {
                        deep_nesting();
                       }
                    }
                 }
              }
           }
        }
     }
  }
 
Maxim Kuznetsov:

and this is the realm of Christmas tree-oriented programming :-)

If the conditions are short expressions, that's fine. Although you can separate them into functions.

And on the closing brackets in such cases I always put a comment repeating the opening bracket's header, so that it is clear what is being closed.
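A rough sketch of both suggestions, using hypothetical stand-ins for the placeholders from the example above: the innermost checks move into a helper function, guard clauses replace some of the nesting, and the remaining closing brackets get a comment repeating their header:

//--- hypothetical stand-ins so the sketch compiles
bool condition = true, other_condition = true, wtf = true;
bool oh_shit = true, really_fck = true;
int  loop_count = 3, repeat_count = 2;

void deep_nesting(void) { Print("reached the deepest point"); }

//--- the innermost checks pulled out into a helper: two nesting levels gone
void ProcessDeepCase(void)
  {
   if(!oh_shit)
      return;                             // early return instead of another nested if
   if(!really_fck)
      return;
   deep_nesting();
  }

void OnStart()
  {
   if(!condition || !other_condition)     // guard clause instead of two nested ifs
      return;

   for(int i = 0; i < loop_count; i++)
     {
      if(!wtf)
         continue;

      for(int r = 0; r < repeat_count; r++)
        {
         ProcessDeepCase();
        } // for(repeat_count)
     } // for(loop_count)
  }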

 
fxsaber:

You know for sure that the compiler will do the right thing with one style of writing. With another style, you just have to trust that the compiler is smarter.

There will be no difference in the execution of this:

if ( a() ) return(true);
if ( b() ) return(true);
return(false);  

and this:

return( a() || b() );

I'm all for code that is easy to read and debug.

 
Andrey Khatimlianskii:

There will be no difference in the execution of this:

and this:

I'm all for code that is easy to read and debug.

I don't like this construct at all in terms of readability and clutter:

if ( a() ) return(true);
if ( b() ) return(true);
return(false);