So, if an order is closed, it must be "crossed out" of the array. In such cases, I used to copy the array "into itself" and reduce its size by one.
In such a case, I would instead write a non-existent ticket, -1, into the array, wait until all orders had been closed, and then delete the entire array (setting its size to 1).
With this approach, an element of the ticket array is checked with only one condition (which also covers the no-order case): if(ArrayOfTicket[i] > 0) .....
IMHO, this is faster than constantly "shaking out" the array.
I don't get it... What difference does it make whether you delete elements one by one or check the indices of non-existent orders... the array gets churned either way...
Anyway, as they said on the news today, it's impossible to patent a flavour. Flavourings only differ in colour, but they all taste the same.
Deleting an element implies copying the remaining elements of the array. I do not delete elements of the array: I mark non-existent elements (tickets) with the value -1, and I delete the array of tickets only when there are no market orders left.
As for the flavourings, that is certainly true. It depends on the problem; in principle, there are usually two options during optimization:
- either make the algorithm more complex, but save the PC's memory and computing resources
- or simplify the algorithm and save computing resources, but spend memory
The checksum is not correct: if there is a 0 in the array, there may be an error.
Nikitin's variant works on exactly such an error.
Yes, you are right. Only Nikitin additionally threw out the zero elements as well. That is why his code looked as if it were faulty; actually, it was solving the task you had originally set.
If you comment out his check for zero elements, the result is the same:
Then again, the checksum now takes the order of the elements into account; it did not before.
By the way, if the order is very important, you can add ArraySort at the end of my variant, and at the same time see how effective ArraySort really is.
I am now interested in another question, to which I cannot find an answer.
Maybe someone can explain why this variant from Kuznetsov's code:
works more than twice as fast as this one, which does exactly the same thing:
What miracles is the compiler working?
Could it be that for a construction like this:
while(arr[i]!=x && i<j) i++;
the compiler finds some special assembler search instruction for the processor?
Is anyone well-versed in modern processor instructions?
I've tried adding ArraySort. It is quite a costly function. On the other hand, it makes discarding elements afterwards easier: all the needed ones end up in a row.
The optimiser has nothing to do with it: that construction simply performs fewer than half as many comparisons...