Generic Class Library - bugs, description, questions, usage and suggestions - page 9
I should add that my solution used only two functions and one array. No pointers or linked structures.
Your solution is no good. You have a 100x100x255 array reserved for just 100 words, i.e. 2,550,000 cells! And if you have 10,000,000 words, you will hit the memory limit of 32-bit systems. And what if there are more than 100 collisions? How many words begin with the letter S, and how many with the letter P? Clearly more than 100, so why shouldn't we store them?
In other words, for each task you need to strike a balance between dictionary size (RAM) and the computational cost of the hash function (CPU).
Having written all this, I realized there is no practical task that requires storing ticks the way discussed in this thread. They are sorted by time and kept in a simple array.
It is the same with History Orders/Deals: judging by HistorySelect, they are stored in an array ordered by time, and in the current implementation there seems to be nothing that would allow searching for records by ticket or ID.
That is because for this kind of history such machinery is unnecessary: in practice a simple array is enough.
Please write succinctly, without clutter in the form of header blocks or superfluous entities.
This is a training example, so excuse me, but no. I will note, however, that production code is indeed written that way: as concisely and efficiently as possible (just the way you like it). In training examples the code is written for everyone, as simply and clearly as possible, so that even an unsophisticated user can understand it.
P.S. The caps, OK, I'll remove them.
In reality, on projects, code is written according to a coding standard.
And a variant like the one in fxsaber's case is not used:
The reason is simple: it is impossible to debug conveniently.
I went back and studied the code suggested by Retag Konow.
Sorry, but it is complete rubbish and shows total ignorance of hashing in general, never mind hash tables.
Why build this coffin on wheels when you could go to Habr and at least get acquainted with the topic of hashing?
Yes, a decent implementation of your own hash table is not a trivial task.
But the proposed code shows no sign of any understanding at all.
Friends, I see the thread has gone quiet.
I do not want to disturb the discussion, so I am voluntarily withdrawing.
The library may contain a lot of interesting things.
Feel free to discuss.
(My solution is in any case worse, because it is 3.2 times slower.)
The size of the array can easily be adjusted to the size of the dictionary.
I did not consider unbounded cases.
In my version it is unlikely that there will be more than 100 collisions. Can you think of more than 100 words that start with the same letter and have the same number of letters?
(Apart from variants like "text 1", "text 2", "text 3", "text 4", "text 5"...)